Solutions Manual For Linear Algebra And Its Applications (4th Edition)


INSTRUCTOR'S SOLUTIONS MANUAL
THOMAS POLASKI
Winthrop University
JUDITH MCDONALD
Washington State University
LINEAR ALGEBRA
AND ITS APPLICATIONS
FOURTH EDITION
David C. Lay
University of Maryland
The author and publisher of this book have used their best efforts in preparing this book. These efforts include the
development, research, and testing of the theories and programs to determine their effectiveness. The author and
publisher make no warranty of any kind, expressed or implied, with regard to these programs or the documentation
contained in this book. The author and publisher shall not be liable in any event for incidental or consequential
damages in connection with, or arising out of, the furnishing, performance, or use of these programs.
Reproduced by Pearson Addison-Wesley from electronic files supplied by the author.
Copyright © 2012, 2006, 1997 Pearson Education, Inc.
Publishing as Pearson Addison-Wesley, 75 Arlington Street, Boston, MA 02116.
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any
form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior written
permission of the publisher. Printed in the United States of America.
ISBN-13: 978-0-321-38888-9
ISBN-10: 0-321-38888-7
1 2 3 4 5 6 BB 15 14 13 12 11
_____________________________________________________
Contents
CHAPTER 1 Linear Equations in Linear Algebra 1
CHAPTER 2 Matrix Algebra 87
CHAPTER 3 Determinants 167
CHAPTER 4 Vector Spaces 197
CHAPTER 5 Eigenvalues and Eigenvectors 273
CHAPTER 6 Orthogonality and Least Squares 357
CHAPTER 7 Symmetric Matrices and Quadratic Forms 405
CHAPTER 8 The Geometry of Vector Spaces 453
Copyright © 2012 Pearson Education, Inc. Publishing as Addison-Wesley.
1.1 SOLUTIONS
Notes:
The key exercises are 7 (or 11 or 12), 19–22, and 25. For brevity, the symbols R1, R2,…, stand
for row 1 (or equation 1), row 2 (or equation 2), and so on. Additional notes are at the end of the section.
1. The system is
x1 + 5x2 = 7
–2x1 – 7x2 = –5
with augmented matrix [1 5 7; –2 –7 –5].
Replace R2 by R2 + (2)R1 and obtain 3x2 = 9: [1 5 7; 0 3 9].
Scale R2 by 1/3 to get x2 = 3: [1 5 7; 0 1 3].
Replace R1 by R1 + (–5)R2 to get x1 = –8: [1 0 –8; 0 1 3].
The solution is (x1, x2) = (–8, 3), or simply (–8, 3).
2. The system is
3x1 + 6x2 = –3
5x1 + 7x2 = 10
with augmented matrix [3 6 –3; 5 7 10].
Scale R1 by 1/3 and obtain x1 + 2x2 = –1: [1 2 –1; 5 7 10].
Replace R2 by R2 + (–5)R1 to get –3x2 = 15: [1 2 –1; 0 –3 15].
Scale R2 by –1/3 to get x2 = –5: [1 2 –1; 0 1 –5].
Replace R1 by R1 + (–2)R2 to get x1 = 9: [1 0 9; 0 1 –5].
The solution is (x1, x2) = (9, –5), or simply (9, –5).
3. The point of intersection satisfies the system of two linear equations:
x1 + 2x2 = 4
x1 – x2 = 1
with augmented matrix [1 2 4; 1 –1 1].
Replace R2 by R2 + (–1)R1 and obtain –3x2 = –3: [1 2 4; 0 –3 –3].
Scale R2 by –1/3 to get x2 = 1: [1 2 4; 0 1 1].
Replace R1 by R1 + (–2)R2 to get x1 = 2: [1 0 2; 0 1 1].
The point of intersection is (x1, x2) = (2, 1).
4. The point of intersection satisfies the system of two linear equations:
x1 + 2x2 = –13
3x1 – 2x2 = 1
with augmented matrix [1 2 –13; 3 –2 1].
Replace R2 by R2 + (–3)R1 and obtain –8x2 = 40: [1 2 –13; 0 –8 40].
Scale R2 by –1/8 to get x2 = –5: [1 2 –13; 0 1 –5].
Replace R1 by R1 + (–2)R2 to get x1 = –3: [1 0 –3; 0 1 –5].
The point of intersection is (x1, x2) = (–3, –5).
5. The system is already in “triangular” form. The fourth equation is x4 = –5, and the other equations do not contain the variable x4. The next two steps should be to use the variable x3 in the third equation to eliminate that variable from the first two equations. In matrix notation, that means to replace R2 by its sum with –4 times R3, and then replace R1 by its sum with 3 times R3.
6. One more step will put the system in triangular form. Replace R4 by its sum with –4 times R3, which produces [1 –6 4 0 –1; 0 2 –7 0 4; 0 0 1 2 –3; 0 0 0 –7 14]. After that, the next step is to scale the fourth row by –1/7.
7. Ordinarily, the next step would be to interchange R3 and R4, to put a 1 in the third row and third column. But in this case, the third row of the augmented matrix corresponds to the equation 0x1 + 0x2 + 0x3 = 1, or simply, 0 = 1. A system containing this condition has no solution. Further row operations are unnecessary once an equation such as 0 = 1 is evident. The solution set is empty.
8. The standard row operations are:
[1 –5 4 0 0; 0 1 0 1 0; 0 0 3 0 0; 0 0 0 2 0] ~ [1 –5 4 0 0; 0 1 0 1 0; 0 0 3 0 0; 0 0 0 1 0] ~ [1 –5 4 0 0; 0 1 0 0 0; 0 0 3 0 0; 0 0 0 1 0] ~ [1 –5 4 0 0; 0 1 0 0 0; 0 0 1 0 0; 0 0 0 1 0] ~ [1 –5 0 0 0; 0 1 0 0 0; 0 0 1 0 0; 0 0 0 1 0] ~ [1 0 0 0 0; 0 1 0 0 0; 0 0 1 0 0; 0 0 0 1 0]
The solution set contains one solution: (0, 0, 0, 0).
9. The system has already been reduced to triangular form. Begin by replacing R3 by R3 + (3)R4:
[1 –1 0 0 –5; 0 1 –2 0 7; 0 0 1 –3 2; 0 0 0 1 4] ~ [1 –1 0 0 –5; 0 1 –2 0 7; 0 0 1 0 14; 0 0 0 1 4]
Next, replace R2 by R2 + (2)R3. Finally, replace R1 by R1 + R2:
~ [1 –1 0 0 –5; 0 1 0 0 21; 0 0 1 0 14; 0 0 0 1 4] ~ [1 0 0 0 16; 0 1 0 0 21; 0 0 1 0 14; 0 0 0 1 4]
The solution set contains one solution: (16, 21, 14, 4).
10. The system has already been reduced to triangular form. Use the 1 in the fourth row to change the 3 and –2 above it to zeros. That is, replace R2 by R2 + (–3)R4 and replace R1 by R1 + (2)R4. For the final step, replace R1 by R1 + (–3)R2.
[1 3 0 –2 –7; 0 1 0 3 6; 0 0 1 0 2; 0 0 0 1 –2] ~ [1 3 0 0 –11; 0 1 0 0 12; 0 0 1 0 2; 0 0 0 1 –2] ~ [1 0 0 0 –47; 0 1 0 0 12; 0 0 1 0 2; 0 0 0 1 –2]
The solution set contains one solution: (–47, 12, 2, –2).
11. First, swap R1 and R2. Then replace R3 by R3 + (–2)R1. Finally, replace R3 by R3 + (1)R2.
[0 1 5 –4; 1 4 3 –2; 2 7 1 –2] ~ [1 4 3 –2; 0 1 5 –4; 2 7 1 –2] ~ [1 4 3 –2; 0 1 5 –4; 0 –1 –5 2] ~ [1 4 3 –2; 0 1 5 –4; 0 0 0 –2]
The system is inconsistent, because the last row would require that 0 = –2 if there were a solution. The solution set is empty.
12. Replace R2 by R2 + (–2)R1 and replace R3 by R3 + (2)R1. Finally, replace R3 by R3 + (3)R2.
[1 –5 4 –3; 2 –7 3 –2; –2 1 7 –1] ~ [1 –5 4 –3; 0 3 –5 4; 0 –9 15 –7] ~ [1 –5 4 –3; 0 3 –5 4; 0 0 0 5]
The system is inconsistent, because the last row would require that 0 = 5 if there were a solution. The solution set is empty.
13. [1 0 –3 8; 2 2 9 7; 0 1 5 –2] ~ [1 0 –3 8; 0 2 15 –9; 0 1 5 –2] ~ [1 0 –3 8; 0 1 5 –2; 0 2 15 –9] ~ [1 0 –3 8; 0 1 5 –2; 0 0 5 –5] ~ [1 0 –3 8; 0 1 5 –2; 0 0 1 –1] ~ [1 0 0 5; 0 1 0 3; 0 0 1 –1]
The solution is (5, 3, –1).
14. [2 0 –6 –8; 0 1 2 3; 3 6 –2 –4] ~ [1 0 –3 –4; 0 1 2 3; 3 6 –2 –4] ~ [1 0 –3 –4; 0 1 2 3; 0 6 7 8] ~ [1 0 –3 –4; 0 1 2 3; 0 0 –5 –10] ~ [1 0 –3 –4; 0 1 2 3; 0 0 1 2] ~ [1 0 –3 –4; 0 1 0 –1; 0 0 1 2] ~ [1 0 0 2; 0 1 0 –1; 0 0 1 2]
The solution is (2, –1, 2).
15. First, replace R3 by R3 + (1)R1, then replace R4 by R4 + (1)R2, and finally replace R4 by R4 + (–1)R3.
[1 –6 0 0 5; 0 1 –4 1 0; –1 6 1 5 3; 0 –1 5 4 0] ~ [1 –6 0 0 5; 0 1 –4 1 0; 0 0 1 5 8; 0 –1 5 4 0] ~ [1 –6 0 0 5; 0 1 –4 1 0; 0 0 1 5 8; 0 0 1 5 0] ~ [1 –6 0 0 5; 0 1 –4 1 0; 0 0 1 5 8; 0 0 0 0 –8]
The system is inconsistent, because the last row would require that 0 = –8 if there were a solution.
16. First replace R4 by R4 + (3/2)R1 and then replace R4 by R4 + (–2/3)R2. (One could also scale R1 and R2 before adding to R4, but the arithmetic is rather easy keeping R1 and R2 unchanged.) Finally, replace R4 by R4 + (–1)R3.
[2 0 0 –4 –10; 0 3 3 0 0; 0 0 1 4 –1; –3 2 3 1 5] ~ [2 0 0 –4 –10; 0 3 3 0 0; 0 0 1 4 –1; 0 2 3 –5 –10] ~ [2 0 0 –4 –10; 0 3 3 0 0; 0 0 1 4 –1; 0 0 1 –5 –10] ~ [2 0 0 –4 –10; 0 3 3 0 0; 0 0 1 4 –1; 0 0 0 –9 –9]
The system is now in triangular form and has a solution. In fact, using the argument from Example 2, one can see that the solution is unique.
17. Row reduce the augmented matrix corresponding to the given system of three equations:
[2 3 –1; 6 5 0; 2 –5 7] ~ [2 3 –1; 0 –4 3; 0 –8 8] ~ [2 3 –1; 0 –4 3; 0 0 2]
The third equation, 0 = 2, shows that the system is inconsistent, so the three lines have no point in common.
18. Row reduce the augmented matrix corresponding to the given system of three equations:
[2 4 4 4; 0 1 –2 –2; 2 3 0 0] ~ [2 4 4 4; 0 1 –2 –2; 0 –1 –4 –4] ~ [2 4 4 4; 0 1 –2 –2; 0 0 –6 –6]
The system is consistent, and using the argument from Example 2, there is only one solution. So the three planes have only one point in common.
19. [1 h 4; 3 6 8] ~ [1 h 4; 0 6–3h –4]
Write c for 6 – 3h. If c = 0, that is, if h = 2, then the system has no solution, because 0 cannot equal –4. Otherwise, when h ≠ 2, the system has a solution.
20. [1 h –5; 2 –8 6] ~ [1 h –5; 0 –8–2h 16]
Write c for –8 – 2h. If c = 0, that is, if h = –4, then the system has no solution, because 0 cannot equal 16. Otherwise, when h ≠ –4, the system has a solution.
21. [1 4 –2; –3 h 6] ~ [1 4 –2; 0 h+12 0]
Write c for h + 12. Then the second equation cx2 = 0 has a solution for every value of c. So the system is consistent for all h.
22. [–4 12 h; 2 –6 –3] ~ [–4 12 h; 0 0 –3 + h/2]
The system is consistent if and only if –3 + h/2 = 0, that is, if and only if h = 6.
23. a. True. See the remarks following the box titled Elementary Row Operations.
b. False. A 5 × 6 matrix has five rows.
c. False. The description applies to a single solution. The solution set consists of all possible
solutions. Only in special cases does the solution set consist of exactly one solution. Mark a
statement True only if the statement is always true.
d. True. See the box before Example 2.
24. a. False. The definition of row equivalent requires that there exist a sequence of row operations that
transforms one matrix into the other.
b. True. See the box preceding the subsection titled Existence and Uniqueness Questions.
c. False. The definition of equivalent systems is in the second paragraph after equation (2).
d. True. By definition, a consistent system has at least one solution.
25. [1 –4 7 g; 0 3 –5 h; –2 5 –9 k] ~ [1 –4 7 g; 0 3 –5 h; 0 –3 5 k+2g] ~ [1 –4 7 g; 0 3 –5 h; 0 0 0 k+2g+h]
Let b denote the number k + 2g + h. Then the third equation represented by the augmented matrix above is 0 = b. This equation is possible if and only if b is zero. So the original system has a solution if and only if k + 2g + h = 0.
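The condition can be checked numerically for sample values of g, h, and k; a small sketch (the rank test used here is the standard consistency criterion, not a method from the text):

```python
import numpy as np

# The system of Exercise 25 is consistent exactly when the coefficient
# matrix and the augmented matrix have the same rank.
def consistent(g, h, k):
    A = np.array([[1.0, -4.0, 7.0],
                  [0.0, 3.0, -5.0],
                  [-2.0, 5.0, -9.0]])
    b = np.array([[g], [h], [k]], dtype=float)
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(np.hstack([A, b]))

print(consistent(1.0, 2.0, -4.0))  # k + 2g + h = 0, so consistent: True
print(consistent(1.0, 2.0, 0.0))   # k + 2g + h = 4, so inconsistent: False
```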
26. Row reduce the augmented matrix for the given system:
[2 4 f; c d g] ~ [1 2 f/2; c d g] ~ [1 2 f/2; 0 d–2c g–c(f/2)]
This shows that d – 2c must be nonzero, since f and g are arbitrary. Otherwise, for some choices of f and g the second row would correspond to an equation of the form 0 = b, where b is nonzero. Thus d ≠ 2c.
27. Row reduce the augmented matrix for the given system. Scale the first row by 1/a, which is possible since a is nonzero. Then replace R2 by R2 + (–c)R1.
[a b f; c d g] ~ [1 b/a f/a; c d g] ~ [1 b/a f/a; 0 d–c(b/a) g–c(f/a)]
The quantity d – c(b/a) must be nonzero, in order for the system to be consistent when the quantity g – c(f/a) is nonzero (which can certainly happen). The condition that d – c(b/a) ≠ 0 can also be written as ad – bc ≠ 0, or ad ≠ bc.
28. A basic principle of this section is that row operations do not affect the solution set of a linear system. Begin with a simple augmented matrix for which the solution is obviously (3, –2, –1), and then perform any elementary row operations to produce other augmented matrices. Here are three examples. The fact that they are all row equivalent proves that they all have the solution set (3, –2, –1).
[1 0 0 3; 0 1 0 –2; 0 0 1 –1] ~ [1 0 0 3; 2 1 0 4; 0 0 1 –1] ~ [1 0 0 3; 2 1 0 4; 2 0 1 5]
29. Swap R1 and R3; swap R1 and R3.
30. Multiply R3 by –1/5; multiply R3 by –5.
31. Replace R3 by R3 + (–4)R1; replace R3 by R3 + (4)R1.
32. Replace R3 by R3 + (–4)R2; replace R3 by R3 + (4)R2.
33. The first equation was given. The others are:
T2 = (T1 + 20 + 40 + T3)/4, or 4T2 – T1 – T3 = 60
T3 = (T4 + T2 + 40 + 30)/4, or 4T3 – T4 – T2 = 70
T4 = (10 + T1 + T3 + 30)/4, or 4T4 – T1 – T3 = 40
Rearranging,
4T1 – T2 – T4 = 30
–T1 + 4T2 – T3 = 60
–T2 + 4T3 – T4 = 70
–T1 – T3 + 4T4 = 40
34. Begin by interchanging R1 and R4, then create zeros in the first column:
[4 –1 0 –1 30; –1 4 –1 0 60; 0 –1 4 –1 70; –1 0 –1 4 40] ~ [–1 0 –1 4 40; –1 4 –1 0 60; 0 –1 4 –1 70; 4 –1 0 –1 30] ~ [–1 0 –1 4 40; 0 4 0 –4 20; 0 –1 4 –1 70; 0 –1 –4 15 190]
Scale R1 by –1 and R2 by 1/4, create zeros in the second column, and replace R4 by R4 + R3:
[1 0 1 –4 –40; 0 1 0 –1 5; 0 –1 4 –1 70; 0 –1 –4 15 190] ~ [1 0 1 –4 –40; 0 1 0 –1 5; 0 0 4 –2 75; 0 0 –4 14 195] ~ [1 0 1 –4 –40; 0 1 0 –1 5; 0 0 4 –2 75; 0 0 0 12 270]
Scale R4 by 1/12, use R4 to create zeros in column 4, and then scale R3 by 1/4:
[1 0 1 –4 –40; 0 1 0 –1 5; 0 0 4 –2 75; 0 0 0 1 22.5] ~ [1 0 1 0 50; 0 1 0 0 27.5; 0 0 4 0 120; 0 0 0 1 22.5] ~ [1 0 1 0 50; 0 1 0 0 27.5; 0 0 1 0 30; 0 0 0 1 22.5]
The last step is to replace R1 by R1 + (–1)R3:
~ [1 0 0 0 20.0; 0 1 0 0 27.5; 0 0 1 0 30.0; 0 0 0 1 22.5]
The solution is (20, 27.5, 30, 22.5).
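The rearranged system from Exercise 33 can also be handed directly to a linear solver; a minimal NumPy check of the solution found above:

```python
import numpy as np

# Coefficient matrix and right side for the four mesh-point
# temperatures T1..T4 (Exercises 33-34).
A = np.array([[4.0, -1.0, 0.0, -1.0],
              [-1.0, 4.0, -1.0, 0.0],
              [0.0, -1.0, 4.0, -1.0],
              [-1.0, 0.0, -1.0, 4.0]])
b = np.array([30.0, 60.0, 70.0, 40.0])

T = np.linalg.solve(A, b)
print(T)  # [20.  27.5 30.  22.5]
```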
Notes:
The Study Guide includes a “Mathematical Note” about statements of the form “If … , then … .”
This early in the course, students typically use single row operations to reduce a matrix. As a result,
even the small grid for Exercise 34 leads to about 80 multiplications or additions (not counting operations
with zero). This exercise should give students an appreciation for matrix programs such as MATLAB.
Exercise 14 in Section 1.10 returns to this problem and states the solution in case students have not
already solved the system of equations. Exercise 31 in Section 2.5 uses this same type of problem in
connection with an LU factorization.
For instructors who wish to use technology in the course, the Study Guide provides boxed MATLAB
notes at the ends of many sections. Parallel notes for Maple, Mathematica, and the TI-83+/84+/89
calculators appear in separate appendices at the end of the Study Guide. The MATLAB box for Section
1.1 describes how to access the data that is available for all numerical exercises in the text. This feature
has the ability to save students time if they regularly have their matrix program at hand when studying
linear algebra. The MATLAB box also explains the basic commands replace, swap, and scale.
These commands are included in the text data sets, available from the text web site,
www.pearsonhighered.com/lay.
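For readers working outside MATLAB, the three elementary operations are easy to imitate. A hypothetical Python sketch (the function names merely mirror the commands mentioned above; row indices here are 0-based):

```python
import numpy as np

def replace(A, i, j, c):
    """Replace row i by row i + c * (row j)."""
    A = A.astype(float).copy()
    A[i] = A[i] + c * A[j]
    return A

def swap(A, i, j):
    """Interchange rows i and j."""
    A = A.copy()
    A[[i, j]] = A[[j, i]]
    return A

def scale(A, i, c):
    """Multiply row i by the nonzero constant c."""
    A = A.astype(float).copy()
    A[i] = c * A[i]
    return A

# Re-run Exercise 1 with these helpers.
M = np.array([[1.0, 5.0, 7.0], [-2.0, -7.0, -5.0]])
M = replace(M, 1, 0, 2.0)    # R2 <- R2 + 2 R1
M = scale(M, 1, 1.0 / 3.0)   # R2 <- (1/3) R2
M = replace(M, 0, 1, -5.0)   # R1 <- R1 - 5 R2
print(M)  # [[ 1.  0. -8.], [ 0.  1.  3.]]
```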
1.2 SOLUTIONS
Notes:
The key exercises are 1–20 and 23–28. (Students should work at least four or five from Exercises
7–14, in preparation for Section 1.5.)
1. Reduced echelon form: a and b. Echelon form: d. Not echelon: c.
2. Reduced echelon form: a. Echelon form: b and d. Not echelon: c.
3. [1 2 4 8; 2 4 6 8; 3 6 9 12] ~ [1 2 4 8; 0 0 –2 –8; 0 0 –3 –12] ~ [1 2 4 8; 0 0 1 4; 0 0 –3 –12] ~ [1 2 4 8; 0 0 1 4; 0 0 0 0] ~ [1 2 0 –8; 0 0 1 4; 0 0 0 0]
Pivot cols 1 and 3. The pivot positions are the (1,1) and (2,3) entries of the original matrix [1 2 4 8; 2 4 6 8; 3 6 9 12].
4. [1 2 4 5; 2 4 5 4; 4 5 4 2] ~ [1 2 4 5; 0 0 –3 –6; 0 –3 –12 –18] ~ [1 2 4 5; 0 –3 –12 –18; 0 0 –3 –6] ~ [1 2 4 5; 0 1 4 6; 0 0 –3 –6] ~ [1 2 4 5; 0 1 4 6; 0 0 1 2] ~ [1 2 4 5; 0 1 0 –2; 0 0 1 2] ~ [1 0 0 1; 0 1 0 –2; 0 0 1 2]
Pivot cols 1, 2, and 3. The pivot positions are the (1,1), (2,2), and (3,3) entries of the original matrix [1 2 4 5; 2 4 5 4; 4 5 4 2].
5. The possible echelon forms are [■ *; 0 ■], [■ *; 0 0], and [0 ■; 0 0], where ■ denotes a leading (nonzero) entry and * denotes any value.
6. The possible echelon forms are [■ *; 0 ■; 0 0], [■ *; 0 0; 0 0], and [0 ■; 0 0; 0 0], where ■ denotes a leading (nonzero) entry and * denotes any value.
7. [1 3 4 7; 3 9 7 6] ~ [1 3 4 7; 0 0 –5 –15] ~ [1 3 4 7; 0 0 1 3] ~ [1 3 0 –5; 0 0 1 3]
Corresponding system of equations:
x1 + 3x2 = –5
x3 = 3
The basic variables (corresponding to the pivot positions) are x1 and x3. The remaining variable x2 is free. Solve for the basic variables in terms of the free variable. The general solution is
x1 = –5 – 3x2
x2 is free
x3 = 3
Note:
Exercise 7 is paired with Exercise 10.
8. [1 –3 0 –5; –3 7 0 9] ~ [1 –3 0 –5; 0 –2 0 –6] ~ [1 –3 0 –5; 0 1 0 3] ~ [1 0 0 4; 0 1 0 3]
Corresponding system of equations:
x1 = 4
x2 = 3
The basic variables (corresponding to the pivot positions) are x1 and x2. The remaining variable x3 is free. Solve for the basic variables in terms of the free variable. In this particular problem, the basic variables do not depend on the value of the free variable.
General solution:
x1 = 4
x2 = 3
x3 is free
Note:
A common error in Exercise 8 is to assume that x3 is zero. To avoid this, identify the basic variables first. Any remaining variables are free. (This type of computation will arise in Chapter 5.)
9. [0 1 –2 3; 1 –3 4 –6] ~ [1 –3 4 –6; 0 1 –2 3] ~ [1 0 –2 3; 0 1 –2 3]
Corresponding system:
x1 – 2x3 = 3
x2 – 2x3 = 3
Basic variables: x1, x2; free variable: x3. General solution:
x1 = 3 + 2x3
x2 = 3 + 2x3
x3 is free
10. [1 –2 –1 –4; 2 –4 5 6] ~ [1 –2 –1 –4; 0 0 7 14] ~ [1 –2 0 –2; 0 0 1 2]
Corresponding system:
x1 – 2x2 = –2
x3 = 2
Basic variables: x1, x3; free variable: x2. General solution:
x1 = –2 + 2x2
x2 is free
x3 = 2
11. [3 –2 4 0; 9 –6 12 0; –6 4 –8 0] ~ [3 –2 4 0; 0 0 0 0; 0 0 0 0] ~ [1 –2/3 4/3 0; 0 0 0 0; 0 0 0 0]
Corresponding system:
x1 – (2/3)x2 + (4/3)x3 = 0
0 = 0
0 = 0
Basic variable: x1; free variables: x2, x3. General solution:
x1 = (2/3)x2 – (4/3)x3
x2 is free
x3 is free
12. Since the bottom row of the matrix is equivalent to the equation 0 = 1, the system has no solutions.
13. [1 –3 0 –1 0 –2; 0 1 0 0 –4 1; 0 0 0 1 9 4; 0 0 0 0 0 0] ~ [1 –3 0 0 9 2; 0 1 0 0 –4 1; 0 0 0 1 9 4; 0 0 0 0 0 0] ~ [1 0 0 0 –3 5; 0 1 0 0 –4 1; 0 0 0 1 9 4; 0 0 0 0 0 0]
Corresponding system:
x1 – 3x5 = 5
x2 – 4x5 = 1
x4 + 9x5 = 4
0 = 0
Basic variables: x1, x2, x4; free variables: x3, x5. General solution:
x1 = 5 + 3x5
x2 = 1 + 4x5
x3 is free
x4 = 4 – 9x5
x5 is free
Note:
The Study Guide discusses the common mistake x3 = 0.
14. [1 0 –5 0 8 3; 0 1 4 –1 0 6; 0 0 0 0 1 0; 0 0 0 0 0 0] ~ [1 0 –5 0 0 3; 0 1 4 –1 0 6; 0 0 0 0 1 0; 0 0 0 0 0 0]
Corresponding system:
x1 – 5x3 = 3
x2 + 4x3 – x4 = 6
x5 = 0
0 = 0
Basic variables: x1, x2, x5; free variables: x3, x4. General solution:
x1 = 3 + 5x3
x2 = 6 – 4x3 + x4
x3 is free
x4 is free
x5 = 0
15. a. The system is consistent. There are many solutions because x3 is a free variable.
b. The system is consistent. There are many solutions because x1 is a free variable.
16. a. The system is inconsistent. (The rightmost column of the augmented matrix is a pivot column.)
b. The system is consistent. There are many solutions because x2 is a free variable.
17. [1 –1 4; –2 3 h] ~ [1 –1 4; 0 1 h+8]
The system has a solution for all values of h, since the augmented column cannot be a pivot column.
18. [1 –3 1; h 6 –2] ~ [1 –3 1; 0 3h+6 –h–2]
If 3h + 6 is zero, that is, if h = –2, then the system has a solution, because 0 equals 0. When h ≠ –2, the system has a solution since the augmented column cannot be a pivot column. Thus the system has a solution for all values of h.
19. [1 h 2; 4 8 k] ~ [1 h 2; 0 8–4h k–8]
a. When h = 2 and k ≠ 8, the augmented column is a pivot column, and the system is inconsistent.
b. When h ≠ 2, the system is consistent and has a unique solution. There are no free variables.
c. When h = 2 and k = 8, the system is consistent and has many solutions.
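The three cases above can be verified numerically; a sketch using the rank-comparison test (an illustration, not a method from the text):

```python
import numpy as np

# Classify the system of Exercise 19 for given h and k by comparing the
# rank of the coefficient matrix with the rank of the augmented matrix.
def classify(h, k):
    A = np.array([[1.0, h], [4.0, 8.0]])
    M = np.array([[1.0, h, 2.0], [4.0, 8.0, k]])
    ra, rm = np.linalg.matrix_rank(A), np.linalg.matrix_rank(M)
    if ra < rm:
        return "inconsistent"
    return "unique" if ra == 2 else "many"

print(classify(2.0, 7.0))  # h = 2, k != 8: inconsistent
print(classify(3.0, 7.0))  # h != 2: unique solution
print(classify(2.0, 8.0))  # h = 2, k = 8: many solutions
```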
20. [1 3 –1; –2 h k] ~ [1 3 –1; 0 h+6 k–2]
a. When h = –6 and k ≠ 2, the system is inconsistent, because the augmented column is a pivot column.
b. When h ≠ –6, the system is consistent and has a unique solution. There are no free variables.
c. When h = –6 and k = 2, the system is consistent and has many solutions.
21. a. False. See Theorem 1.
b. False. See the second paragraph of the section.
c. True. Basic variables are defined after equation (4).
d. True. This statement is at the beginning of Parametric Descriptions of Solution Sets.
e. False. The row shown corresponds to the equation 5x4 = 0, which does not by itself lead to a contradiction. So the system might be consistent or it might be inconsistent.
22. a. True. See Theorem 1.
b. False. See Theorem 2.
c. False. See the beginning of the subsection Pivot Positions. The pivot positions in a matrix are
determined completely by the positions of the leading entries in the nonzero rows of any echelon
form obtained from the matrix.
d. True. See the paragraph just before Example 4.
e. False. The existence of at least one solution is not related to the presence or absence of free
variables. If the system is inconsistent, the solution set is empty. See the solution of Practice
Problem 2.
23. Since there are four pivots (one in each row), the augmented matrix must reduce to the form
[1 0 0 0 a; 0 1 0 0 b; 0 0 1 0 c; 0 0 0 1 d], and so x1 = a, x2 = b, x3 = c, x4 = d.
No matter what the values of a, b, c, and d, the solution exists and is unique.
24. The system is consistent because there is not a pivot in column 5, which means that there is not a row
of the form [0 0 0 0 1]. Since the matrix is the augmented matrix for a system, Theorem 2 shows
that the system has a solution.
25. If the coefficient matrix has a pivot position in every row, then there is a pivot position in the bottom
row, and there is no room for a pivot in the augmented column. So, the system is consistent, by
Theorem 2.
26. Since the coefficient matrix has three pivot columns, there is a pivot in each row of the coefficient
matrix. Thus the augmented matrix will not have a row of the form [0 0 0 0 0 1], and the
system is consistent.
27. “If a linear system is consistent, then the solution is unique if and only if every column in the coefficient matrix is a pivot column; otherwise there are infinitely many solutions.”
This statement is true because the free variables correspond to nonpivot columns of the coefficient matrix. The columns are all pivot columns if and only if there are no free variables. And there are no free variables if and only if the solution is unique, by Theorem 2.
28. Every column in the augmented matrix except the rightmost column is a pivot column, and the
rightmost column is not a pivot column.
29. An underdetermined system always has more variables than equations. There cannot be more basic
variables than there are equations, so there must be at least one free variable. Such a variable may be
assigned infinitely many different values. If the system is consistent, each different value of a free
variable will produce a different solution, and the system will not have a unique solution. If the
system is inconsistent, it will not have any solution.
30. Example:
x1 + x2 + x3 = 4
2x1 + 2x2 + 2x3 = 5
31. Yes, a system of linear equations with more equations than unknowns can be consistent. Example (in which x1 = x2 = 1):
x1 + x2 = 2
x1 – x2 = 0
3x1 + 2x2 = 5
32. According to the numerical note in Section 1.2, when n = 20 the reduction to echelon form takes about 2(20)^3/3 ≈ 5,333 flops, while further reduction to reduced echelon form needs at most (20)^2 = 400 flops. Of the total flops, the “backward phase” is about 400/5733 ≈ .07, or about 7%. When n = 200, the estimates are 2(200)^3/3 ≈ 5,333,333 flops for the reduction to echelon form and (200)^2 = 40,000 flops for the backward phase. The fraction associated with the backward phase is about (4×10^4)/(5.3×10^6) ≈ .007, or about .7%.
33. For a quadratic polynomial p(t) = a0 + a1·t + a2·t^2 to exactly fit the data (1, 6), (2, 15), and (3, 28), the coefficients a0, a1, a2 must satisfy the systems of equations given in the text. Row reduce the augmented matrix:
[1 1 1 6; 1 2 4 15; 1 3 9 28] ~ [1 1 1 6; 0 1 3 9; 0 2 8 22] ~ [1 1 1 6; 0 1 3 9; 0 0 2 4] ~ [1 1 1 6; 0 1 3 9; 0 0 1 2] ~ [1 1 0 4; 0 1 0 3; 0 0 1 2] ~ [1 0 0 1; 0 1 0 3; 0 0 1 2]
The polynomial is p(t) = 1 + 3t + 2t^2.
34. [M] The system of equations to be solved is:
a0 + a1·0 + a2·0^2 + a3·0^3 + a4·0^4 + a5·0^5 = 0
a0 + a1·2 + a2·2^2 + a3·2^3 + a4·2^4 + a5·2^5 = 2.90
a0 + a1·4 + a2·4^2 + a3·4^3 + a4·4^4 + a5·4^5 = 14.8
a0 + a1·6 + a2·6^2 + a3·6^3 + a4·6^4 + a5·6^5 = 39.6
a0 + a1·8 + a2·8^2 + a3·8^3 + a4·8^4 + a5·8^5 = 74.3
a0 + a1·10 + a2·10^2 + a3·10^3 + a4·10^4 + a5·10^5 = 119
The unknowns are a0, a1, …, a5. Use technology to compute the reduced echelon form of the augmented matrix:
[1 0 0 0 0 0 0; 1 2 4 8 16 32 2.9; 1 4 16 64 256 1024 14.8; 1 6 36 216 1296 7776 39.6; 1 8 64 512 4096 32768 74.3; 1 10 100 1000 10000 100000 119]
~ [1 0 0 0 0 0 0; 0 2 4 8 16 32 2.9; 0 0 8 48 224 960 9; 0 0 24 192 1248 7680 30.9; 0 0 48 480 4032 32640 62.7; 0 0 80 960 9920 99840 104.5]
~ [1 0 0 0 0 0 0; 0 2 4 8 16 32 2.9; 0 0 8 48 224 960 9; 0 0 0 48 576 4800 3.9; 0 0 0 192 2688 26880 8.7; 0 0 0 480 7680 90240 14.5]
~ [1 0 0 0 0 0 0; 0 2 4 8 16 32 2.9; 0 0 8 48 224 960 9; 0 0 0 48 576 4800 3.9; 0 0 0 0 384 7680 –6.9; 0 0 0 0 1920 42240 –24.5]
~ [1 0 0 0 0 0 0; 0 2 4 8 16 32 2.9; 0 0 8 48 224 960 9; 0 0 0 48 576 4800 3.9; 0 0 0 0 384 7680 –6.9; 0 0 0 0 0 3840 10]
~ [1 0 0 0 0 0 0; 0 2 4 8 16 32 2.9; 0 0 8 48 224 960 9; 0 0 0 48 576 4800 3.9; 0 0 0 0 384 7680 –6.9; 0 0 0 0 0 1 .0026]
~ [1 0 0 0 0 0 0; 0 2 4 8 16 0 2.8167; 0 0 8 48 224 0 6.5000; 0 0 0 48 576 0 –8.6000; 0 0 0 0 384 0 –26.900; 0 0 0 0 0 1 .0026] ~ ⋯ ~
[1 0 0 0 0 0 0; 0 1 0 0 0 0 1.7125; 0 0 1 0 0 0 –1.1948; 0 0 0 1 0 0 .6615; 0 0 0 0 1 0 –.0701; 0 0 0 0 0 1 .0026]
Thus p(t) = 1.7125t – 1.1948t^2 + .6615t^3 – .0701t^4 + .0026t^5, and p(7.5) = 64.6 hundred lb.
Notes:
In Exercise 34, if the coefficients are retained to higher accuracy than shown here, then p(7.5) =
64.8. If a polynomial of lower degree is used, the resulting system of equations is overdetermined. The
augmented matrix for such a system is the same as the one used to find p, except that at least column 6 is
missing. When the augmented matrix is row reduced, the sixth row of the augmented matrix will be
entirely zero except for a nonzero entry in the augmented column, indicating that no solution exists.
Exercise 34 requires 25 row operations. It should give students an appreciation for higher-level
commands such as gauss and bgauss, discussed in Section 1.4 of the Study Guide. The command
ref (reduced echelon form) is available, but I recommend postponing that command until Chapter 2.
The Study Guide includes a “Mathematical Note” about the phrase “if and only if,” used in Theorem 2.
1.3 SOLUTIONS
Notes:
The key exercises are 11–16, 19–22, 25, and 26. A discussion of Exercise 25 will help students understand the notation [a1 a2 a3], {a1, a2, a3}, and Span{a1, a2, a3}.
1. u + v = [–1; 2] + [–3; –1] = [–1 + (–3); 2 + (–1)] = [–4; 1].
Using the definitions carefully,
u – 2v = [–1; 2] + (–2)[–3; –1] = [–1 + (–2)(–3); 2 + (–2)(–1)] = [–1 + 6; 2 + 2] = [5; 4],
or, more quickly, u – 2v = [–1; 2] – 2[–3; –1] = [–1 + 6; 2 + 2] = [5; 4]. The intermediate step is often not written.
2. u + v = [3; 2] + [2; –1] = [3 + 2; 2 + (–1)] = [5; 1].
Using the definitions carefully,
u – 2v = [3; 2] + (–2)[2; –1] = [3 + (–2)(2); 2 + (–2)(–1)] = [3 – 4; 2 + 2] = [–1; 4],
or, more quickly, u – 2v = [3; 2] – 2[2; –1] = [3 – 4; 2 + 2] = [–1; 4]. The intermediate step is often not written.
3. 4.
5. The vector equation is
x1[3; 2; 8] + x2[5; 0; 9] = [2; 3; 8],
that is, [3x1; 2x1; 8x1] + [5x2; 0; 9x2] = [2; 3; 8], or [3x1 + 5x2; 2x1; 8x1 + 9x2] = [2; 3; 8].
Equating corresponding entries gives the system
3x1 + 5x2 = 2
2x1 = 3
8x1 + 9x2 = 8
Usually the intermediate steps are not displayed.
6. The vector equation is
x1[3; 2] + x2[7; 3] + x3[–2; 1] = [0; 0],
that is, [3x1; 2x1] + [7x2; 3x2] + [–2x3; x3] = [0; 0], or [3x1 + 7x2 – 2x3; 2x1 + 3x2 + x3] = [0; 0].
Equating corresponding entries gives the system
3x1 + 7x2 – 2x3 = 0
2x1 + 3x2 + x3 = 0
Usually the intermediate steps are not displayed.
7. See the figure below. Since the grid can be extended in every direction, the figure suggests that every
vector in R
2
can be written as a linear combination of u and v.
To write a vector a as a linear combination of u and v, imagine walking from the origin to a along
the grid "streets" and keep track of how many "blocks" you travel in the u-direction and how many in
the v-direction.
a. To reach a from the origin, you might travel 1 unit in the u-direction and –2 units in the v-direction (that is, 2 units in the negative v-direction). Hence a = u – 2v.
b. To reach b from the origin, travel 2 units in the u-direction and –2 units in the v-direction. So b = 2u – 2v. Or, use the fact that b is 1 unit in the u-direction from a, so that b = a + u = (u – 2v) + u = 2u – 2v.
c. The vector c is –1.5 units from b in the v-direction, so c = b – 1.5v = (2u – 2v) – 1.5v = 2u – 3.5v.
d. The “map” suggests that you can reach d if you travel 3 units in the u-direction and –4 units in the v-direction. If you prefer to stay on the paths displayed on the map, you might travel from the origin to –3v, then move 3 units in the u-direction, and finally move –1 unit in the v-direction. So d = –3v + 3u – v = 3u – 4v. Another solution is d = b – 2v + u = (2u – 2v) – 2v + u = 3u – 4v.
Figure for Exercises 7 and 8
8. See the figure above. Since the grid can be extended in every direction, the figure suggests that every
vector in R
2
can be written as a linear combination of u and v.
w. To reach w from the origin, travel –1 units in the u-direction (that is, 1 unit in the negative
u-direction) and travel 2 units in the v-direction. Thus, w = (–1)u + 2v, or w = 2vu.
x. To reach x from the origin, travel 2 units in the v-direction and –2 units in the u-direction. Thus,
x = –2u + 2v. Or, use the fact that x is –1 units in the u-direction from w, so that
x = w – u = (–u + 2v) – u = –2u + 2v
y. The vector y is 1.5 units from x in the v-direction, so
y = x + 1.5v = (–2u + 2v) + 1.5v = –2u + 3.5v
z. The map suggests that you can reach z if you travel 4 units in the v-direction and –3 units in the
u-direction. So z = 4v – 3u = –3u + 4v. If you prefer to stay on the paths displayed on the “map,”
you might travel from the origin to –2u, then 4 units in the v-direction, and finally move –1 unit
in the u-direction. So
z = –2u + 4v – u = –3u + 4v
9. The system
x2 + 5x3 = 0
4x1 + 6x2 – x3 = 0
–x1 + 3x2 – 8x3 = 0
is equivalent to [x2 + 5x3; 4x1 + 6x2 – x3; –x1 + 3x2 – 8x3] = [0; 0; 0], that is,
[0; 4x1; –x1] + [x2; 6x2; 3x2] + [5x3; –x3; –8x3] = [0; 0; 0], or
x1[0; 4; –1] + x2[1; 6; 3] + x3[5; –1; –8] = [0; 0; 0]
Usually, the intermediate calculations are not displayed.
Note: The Study Guide says, "Check with your instructor whether you need to 'show work' on a problem such as Exercise 9."
10. The system
    3x1 – 2x2 + 4x3 = 3
    –2x1 – 7x2 + 5x3 = 1
    5x1 + 4x2 – 3x3 = 2
is equivalent to the vector equation
    [3x1 – 2x2 + 4x3; –2x1 – 7x2 + 5x3; 5x1 + 4x2 – 3x3] = [3; 1; 2]
which can be written as
    x1[3; -2; 5] + x2[-2; -7; 4] + x3[4; 5; -3] = [3; 1; 2]
Usually, the intermediate calculations are not displayed.
11. The question
    Is b a linear combination of a1, a2, and a3?
is equivalent to the question
    Does the vector equation x1a1 + x2a2 + x3a3 = b have a solution?
The equation
    x1[1; -2; 0] + x2[0; 1; 2] + x3[5; -6; 8] = [2; -1; 6]    (*)
has the same solution set as the linear system whose augmented matrix is
    M = [1 0 5 2; -2 1 -6 -1; 0 2 8 6]
Row reduce M until the pivot positions are visible:
    M ~ [1 0 5 2; 0 1 4 3; 0 2 8 6] ~ [1 0 5 2; 0 1 4 3; 0 0 0 0]
The linear system corresponding to M has a solution, so the vector equation (*) has a solution, and therefore b is a linear combination of a1, a2, and a3.
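The consistency test used in Exercises 11–14 is easy to automate. The sketch below (in Python rather than the MATLAB mentioned elsewhere in this manual; the function names are our own) row reduces an augmented matrix with exact rational arithmetic and reports whether the system is consistent:

```python
from fractions import Fraction

def row_echelon(M):
    """Forward elimination with exact rational arithmetic."""
    M = [[Fraction(x) for x in row] for row in M]
    pivot = 0
    for col in range(len(M[0])):
        row = next((r for r in range(pivot, len(M)) if M[r][col] != 0), None)
        if row is None:
            continue
        M[pivot], M[row] = M[row], M[pivot]
        for r in range(pivot + 1, len(M)):
            f = M[r][col] / M[pivot][col]
            M[r] = [a - f * b for a, b in zip(M[r], M[pivot])]
        pivot += 1
    return M

def is_consistent(augmented):
    """Inconsistent exactly when some echelon row reads [0 ... 0 | nonzero]."""
    return not any(all(x == 0 for x in row[:-1]) and row[-1] != 0
                   for row in row_echelon(augmented))

# Exercise 11: the augmented matrix [a1 a2 a3 b]
print(is_consistent([[1, 0, 5, 2], [-2, 1, -6, -1], [0, 2, 8, 6]]))  # True
```

Running the same test on the inconsistent augmented matrix of Exercise 13 returns False.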
12. The equation
    x1[1; 0; 1] + x2[-2; 3; -2] + x3[-6; 7; 5] = [11; -5; 9]    (*)
has the same solution set as the linear system whose augmented matrix is
    M = [1 -2 -6 11; 0 3 7 -5; 1 -2 5 9]
Row reduce M until the pivot positions are visible:
    M ~ [1 -2 -6 11; 0 3 7 -5; 0 0 11 -2]
The linear system corresponding to M has a solution, so the vector equation (*) has a solution, and therefore b is a linear combination of a1, a2, and a3.
13. Denote the columns of A by a1, a2, a3. To determine if b is a linear combination of these columns, use the boxed fact in the subsection Linear Combinations. Row reduce the augmented matrix [a1 a2 a3 b] until you reach echelon form:
    [a1 a2 a3 b] = [1 -4 2 3; 0 3 5 -7; -2 8 -4 -3] ~ [1 -4 2 3; 0 3 5 -7; 0 0 0 3]
The system for this augmented matrix is inconsistent, so b is not a linear combination of the columns of A.
14. Row reduce the augmented matrix [a1 a2 a3 b] until you reach echelon form:
    [a1 a2 a3 b] = [1 0 5 2; -2 1 -6 -1; 0 2 8 6] ~ [1 0 5 2; 0 1 4 3; 0 2 8 6] ~ [1 0 5 2; 0 1 4 3; 0 0 0 0]
The linear system corresponding to this matrix has a solution, so b is a linear combination of the columns of A.
15. [a1 a2 b] = [1 -5 3; 3 -8 -5; -1 2 h] ~ [1 -5 3; 0 7 -14; 0 -3 h+3] ~ [1 -5 3; 0 1 -2; 0 -3 h+3] ~ [1 -5 3; 0 1 -2; 0 0 h-3]. The vector b is in Span{a1, a2} when h – 3 is zero, that is, when h = 3.
16. [v1 v2 y] = [1 -2 h; 0 1 -3; -2 7 -5] ~ [1 -2 h; 0 1 -3; 0 3 2h-5] ~ [1 -2 h; 0 1 -3; 0 0 2h+4]. The vector y is in Span{v1, v2} when 4 + 2h is zero, that is, when h = –2.
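As a spot check of h = –2, the weights obtained by back-substitution in the echelon form can be verified directly. The short Python sketch below uses the columns and target as reconstructed in this exercise:

```python
# Columns and target of Exercise 16, with h = -2.
h = -2
v1 = [1, 0, -2]
v2 = [-2, 1, 7]
y  = [h, -3, -5]

# Back-substitution in the echelon form gives x2 = -3, x1 = h + 2*x2 = -8.
x1, x2 = -8, -3
combo = [x1 * a + x2 * b for a, b in zip(v1, v2)]
print(combo == y)  # True
```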
17. Noninteger weights are acceptable, of course, but some simple choices are 0v1 + 0v2 = 0, and
    1v1 + 0v2 = [3; 1; 2], 0v1 + 1v2 = [-4; 0; 1], 1v1 + 1v2 = [-1; 1; 3], 1v1 – 1v2 = [7; 1; 1]
18. Some likely choices are 0v1 + 0v2 = 0, and
    1v1 + 0v2 = [1; 1; -2], 0v1 + 1v2 = [-2; 3; 0], 1v1 + 1v2 = [-1; 4; -2], 1v1 – 1v2 = [3; -2; -2]
19. By inspection, v2 = (3/2)v1. Any linear combination of v1 and v2 is actually just a multiple of v1. For instance,
    av1 + bv2 = av1 + b(3/2)v1 = (a + 3b/2)v1
So Span{v1, v2} is the set of points on the line through v1 and 0.
Note:
Exercises 19 and 20 prepare the way for ideas in Sections 1.4 and 1.7.
20. Span{v1, v2} is a plane in R^3 through the origin, because neither vector in this problem is a multiple of the other.
21. Let y = [h; k]. Then [u v y] = [2 2 h; -1 1 k] ~ [2 2 h; 0 2 k + h/2]. This augmented matrix corresponds to a consistent system for all h and k. So y is in Span{u, v} for all h and k.
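Because the coefficient matrix [2 2; -1 1] has a pivot in each row, the weights can even be written down explicitly for any h and k, reading them off from the echelon form. A Python sketch (the function name is our own):

```python
from fractions import Fraction

def weights(h, k):
    """Weights with x1*u + x2*v = (h, k) for u = (2, -1), v = (2, 1),
    read off from the echelon form [2 2 h; 0 2 k + h/2]."""
    h, k = Fraction(h), Fraction(k)
    x2 = (k + h / 2) / 2
    x1 = (h - 2 * x2) / 2
    return x1, x2

x1, x2 = weights(10, 3)
print(x1, x2)                   # 1 4
print(2*x1 + 2*x2, -x1 + x2)    # 10 3 -- the combination really hits (h, k)
```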
22. Construct any 3×4 matrix in echelon form that corresponds to an inconsistent system. Perform
sufficient row operations on the matrix to eliminate all zero entries in the first three columns.
23. a. False. The alternative notation for a (column) vector is (–4, 3), using parentheses and commas.
b. False. Plot the points to verify this. Or, see the statement preceding Example 3. If [-5; 2] were on the line through [-2; 5] and the origin, then [-5; 2] would have to be a multiple of [-2; 5], which is not the case.
c. True. See the line displayed just before Example 4.
d. True. See the box that discusses the matrix in (5).
e. False. The statement is often true, but Span{u, v} is not a plane when v is a multiple of u, or
when u is the zero vector.
24. a. False. Span{u, v} can be a plane.
b. True. See the beginning of the subsection Vectors in R^n.
c. True. See the comment following the definition of Span{v1, …, vp}.
d. False. (u – v) + v = u – v + v = u.
e. False. Setting all the weights equal to zero results in a legitimate linear combination of a set of
vectors.
25. a. There are only three vectors in the set {a1, a2, a3}, and b is not one of them.
b. There are infinitely many vectors in W = Span{a1, a2, a3}. To determine if b is in W, use the method of Exercise 13.
    [a1 a2 a3 b] = [1 0 -4 4; 0 3 -2 1; -2 6 3 -4] ~ [1 0 -4 4; 0 3 -2 1; 0 6 -5 4] ~ [1 0 -4 4; 0 3 -2 1; 0 0 -1 2]
The system for this augmented matrix is consistent, so b is in W.
c. a1 = 1a1 + 0a2 + 0a3. See the discussion in the text following the definition of Span{v1, …, vp}.
26. a. [a1 a2 a3 b] = [2 0 6 10; -1 8 5 3; 1 -2 1 7] ~ [1 0 3 5; -1 8 5 3; 1 -2 1 7] ~ [1 0 3 5; 0 8 8 8; 0 -2 -2 2] ~ [1 0 3 5; 0 8 8 8; 0 0 0 4]
No, b is not a linear combination of the columns of A; that is, b is not in W.
b. The second column of A is in W because a2 = 0a1 + 1a2 + 0a3.
27. a. 5v1 is the output of 5 days' operation of mine #1.
b. The total output is x1v1 + x2v2, so x1 and x2 should satisfy
    x1v1 + x2v2 = [240; 2824]
c. [M] Reduce the augmented matrix [30 40 240; 600 380 2824] ~ [1 0 1.73; 0 1 4.70].
Operate mine #1 for 1.73 days and mine #2 for 4.70 days. (This is an approximate solution.)
28. a. The amount of heat produced when the steam plant burns x1 tons of anthracite and x2 tons of bituminous coal is 27.6x1 + 30.2x2 million Btu.
b. The total output produced by x1 tons of anthracite and x2 tons of bituminous coal is given by the vector
    x1[27.6; 3100; 250] + x2[30.2; 6400; 360]
c. [M] The appropriate values for x1 and x2 satisfy
    x1[27.6; 3100; 250] + x2[30.2; 6400; 360] = [162; 23,610; 1,623]
To solve, row reduce the augmented matrix:
    [27.6 30.2 162; 3100 6400 23610; 250 360 1623] ~ [1 0 3.900; 0 1 1.800; 0 0 0]
The steam plant burned 3.9 tons of anthracite coal and 1.8 tons of bituminous coal.
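The 2×2 subsystem given by the first two equations can also be solved with Cramer's rule, and the third equation then checks out automatically; a Python sketch:

```python
# Equations: 27.6*x1 + 30.2*x2 = 162 and 3100*x1 + 6400*x2 = 23610.
a, b, e = 27.6, 30.2, 162.0
c, d, f = 3100.0, 6400.0, 23610.0

det = a * d - b * c
x1 = (e * d - b * f) / det   # tons of anthracite
x2 = (a * f - e * c) / det   # tons of bituminous coal
print(round(x1, 1), round(x2, 1))      # 3.9 1.8
print(round(250 * x1 + 360 * x2, 1))   # 1623.0 -- the third equation holds too
```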
29. The total mass is 4 + 2 + 3 + 5 = 14. So v = (4v1 + 2v2 + 3v3 + 5v4)/14. That is,
    v = (1/14)(4[2; -2; 4] + 2[-4; 2; 3] + 3[4; 0; -2] + 5[1; -6; 0])
      = (1/14)[8 - 8 + 12 + 5; -8 + 4 + 0 - 30; 16 + 6 - 6 + 0]
      = [17/14; -17/7; 8/7] ≈ [1.214; -2.429; 1.143]
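The same weighted average is easy to compute exactly; the Python sketch below uses the masses and points of this exercise together with the standard-library Fraction type:

```python
from fractions import Fraction

masses = [4, 2, 3, 5]
points = [(2, -2, 4), (-4, 2, 3), (4, 0, -2), (1, -6, 0)]

m = sum(masses)  # total mass 14
v = tuple(sum(Fraction(w * p[i]) for w, p in zip(masses, points)) / m
          for i in range(3))
print(v)  # (Fraction(17, 14), Fraction(-17, 7), Fraction(8, 7))
```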
30. Let m be the total mass of the system. By definition,
    v = (1/m)(m1v1 + ⋯ + mkvk) = (m1/m)v1 + ⋯ + (mk/m)vk
The second expression displays v as a linear combination of v1, …, vk, which shows that v is in Span{v1, …, vk}.
31. a. The center of mass is (1/3)([0; 1] + [8; 1] + [2; 4]) = [10/3; 2].
b. The total mass of the new system is 9 grams. The three masses added, w1, w2, and w3, satisfy the equation
    (1/9)((w1 + 1)[0; 1] + (w2 + 1)[8; 1] + (w3 + 1)[2; 4]) = [2; 2]
which can be rearranged to
    (w1 + 1)[0; 1] + (w2 + 1)[8; 1] + (w3 + 1)[2; 4] = [18; 18]
and
    w1[0; 1] + w2[8; 1] + w3[2; 4] = [8; 12]
The condition w1 + w2 + w3 = 6 and the vector equation above combine to produce a system of three equations whose augmented matrix is shown below, along with a sequence of row operations:
    [1 1 1 6; 0 8 2 8; 1 1 4 12] ~ [1 1 1 6; 0 8 2 8; 0 0 3 6] ~ [1 1 1 6; 0 8 2 8; 0 0 1 2]
    ~ [1 1 0 4; 0 8 0 4; 0 0 1 2] ~ [1 0 0 3.5; 0 8 0 4; 0 0 1 2] ~ [1 0 0 3.5; 0 1 0 .5; 0 0 1 2]
Answer: Add 3.5 g at (0, 1), add .5 g at (8, 1), and add 2 g at (2, 4).
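The triangular structure of the reduced system makes back-substitution immediate; a Python sketch of the three steps:

```python
# Augmented matrix [1 1 1 6; 0 8 2 8; 1 1 4 12]:
# subtracting row 1 from row 3 gives 3*w3 = 6, then substitute upward.
w3 = (12 - 6) / 3
w2 = (8 - 2 * w3) / 8
w1 = 6 - w2 - w3
print(w1, w2, w3)  # 3.5 0.5 2.0
```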
Extra problem: Ignore the mass of the plate, and distribute 6 gm at the three vertices to make the center
of mass at (2, 2). Answer: Place 3 g at (0, 1), 1 g at (8, 1), and 2 g at (2, 4).
32. See the parallelograms drawn on the figure from the text that accompanies this exercise. Here c1, c2, c3, and c4 are suitable scalars. The darker parallelogram shows that b is a linear combination of v1 and v2, that is,
    c1v1 + c2v2 + 0v3 = b
The larger parallelogram shows that b is a linear combination of v1 and v3, that is,
    c4v1 + 0v2 + c3v3 = b
So the equation x1v1 + x2v2 + x3v3 = b has at least two solutions, not just one solution. (In fact, the equation has infinitely many solutions.)
33. a. For j = 1, …, n, the jth entry of (u + v) + w is (uj + vj) + wj. By associativity of addition in R, this entry equals uj + (vj + wj), which is the jth entry of u + (v + w). By definition of equality of vectors, (u + v) + w = u + (v + w).
b. For any scalar c, the jth entry of c(u + v) is c(uj + vj), and the jth entry of cu + cv is cuj + cvj (by definition of scalar multiplication and vector addition). These entries are equal, by a distributive law in R. So c(u + v) = cu + cv.
34. a. For j = 1, …, n, uj + (–1)uj = (–1)uj + uj = 0, by properties of R. By vector equality, u + (–1)u = (–1)u + u = 0.
b. For scalars c and d, the jth entries of c(du) and (cd)u are c(duj) and (cd)uj, respectively. These entries in R are equal, so the vectors c(du) and (cd)u are equal.
Note:
When an exercise in this section involves a vector equation, the corresponding technology data (in
the data files on the web) is usually presented as a set of (column) vectors. To use MATLAB or other
technology, a student must first construct an augmented matrix from these vectors. The MATLAB note in
the Study Guide describes how to do this. The appendices in the Study Guide give corresponding
information about Maple, Mathematica, and the TI calculators.
1.4 SOLUTIONS
Notes:
Key exercises are 1–20, 27, 28, 31 and 32. Exercises 29, 30, 33, and 34 are harder. Exercise 34
anticipates the Invertible Matrix Theorem but is not used in the proof of that theorem.
1. The matrix-vector product Ax is not defined because the number of columns (2) in the 3×2 matrix A does not match the number of entries (3) in the vector x.
2. The matrix-vector product Ax is not defined because the number of columns (1) in the 3×1 matrix A does not match the number of entries (2) in the vector x.
3. Ax = [1 2; -3 1; 1 6][-2; 3] = (-2)[1; -3; 1] + 3[2; 1; 6] = [-2; 6; -2] + [6; 3; 18] = [4; 9; 16], and
   Ax = [1 2; -3 1; 1 6][-2; 3] = [1·(-2) + 2·3; (-3)·(-2) + 1·3; 1·(-2) + 6·3] = [4; 9; 16]
4. Ax = [1 3 -4; 3 2 1][1; 2; 1] = 1[1; 3] + 2[3; 2] + 1[-4; 1] = [1 + 6 - 4; 3 + 4 + 1] = [3; 8], and
   Ax = [1 3 -4; 3 2 1][1; 2; 1] = [1·1 + 3·2 + (-4)·1; 3·1 + 2·2 + 1·1] = [3; 8]
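Exercises 3 and 4 compute Ax both as a combination of columns (the definition) and by the row-vector rule; both rules are easy to code and must agree. A Python sketch (the function names are our own):

```python
def ax_by_columns(A, x):
    """Ax as a linear combination of the columns of A (the definition)."""
    n_rows = len(A)
    result = [0] * n_rows
    for j, weight in enumerate(x):
        for i in range(n_rows):
            result[i] += weight * A[i][j]
    return result

def ax_by_rows(A, x):
    """Ax by the row-vector rule: dot each row of A with x."""
    return [sum(a * b for a, b in zip(row, x)) for row in A]

A = [[1, 2], [-3, 1], [1, 6]]   # the matrix of Exercise 3
x = [-2, 3]
print(ax_by_columns(A, x))  # [4, 9, 16]
print(ax_by_rows(A, x))     # [4, 9, 16]
```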
5. On the left side of the matrix equation, use the entries in the vector x as the weights in a linear combination of the columns of the matrix A: writing A = [a1 a2 ⋯ an] and letting x have entries x1, …, xn, the left side Ax equals x1a1 + x2a2 + ⋯ + xnan, and this linear combination equals the vector on the right side of the equation.
6. On the left side of the matrix equation, use the entries in the vector x as the weights in a linear combination of the columns of the matrix A:
    -3[-2; 3; 8; -2] + 5[3; 2; -5; 1] = [21; 1; -49; 11]
7. The left side of the equation is a linear combination of three vectors. Write the matrix A whose columns are those three vectors, and create a variable vector x with three entries:
    A = [4 -5 7; -1 3 -8; 7 -5 0; -4 1 2] and x = [x1; x2; x3]
Thus the equation Ax = b is
    [4 -5 7; -1 3 -8; 7 -5 0; -4 1 2][x1; x2; x3] = [6; -8; 0; -7]
For your information: The unique solution of this equation is (5, 7, 3). Finding the solution by hand would be time-consuming.
Note:
The skill of writing a vector equation as a matrix equation will be important for both theory and
application throughout the text. See also Exercises 27 and 28.
8. The left side of the equation is a linear combination of four vectors. Write the matrix A whose columns are those four vectors, and create a variable vector with four entries:
    A = [2 -1 -4 0; -4 5 3 2] and z = [z1; z2; z3; z4]
Then the equation Az = b is
    [2 -1 -4 0; -4 5 3 2][z1; z2; z3; z4] = [5; 12]
For your information: One solution is (8, 7, 1, 3). The general solution is z1 = 37/6 + (17/6)z3 – (1/3)z4, z2 = 22/3 + (5/3)z3 – (2/3)z4, with z3 and z4 free.
9. The system has the same solution set as the vector equation
    x1[5; 0] + x2[1; 2] + x3[-3; 4] = [8; 0]
and this equation has the same solution set as the matrix equation
    [5 1 -3; 0 2 4][x1; x2; x3] = [8; 0]
10. The system has the same solution set as the vector equation
    x1[4; 5; 3] + x2[1; -3; 1] = [8; 2; 1]
and this equation has the same solution set as the matrix equation
    [4 1; 5 -3; 3 1][x1; x2] = [8; 2; 1]
11. To solve Ax = b, row reduce the augmented matrix [a1 a2 a3 b] for the corresponding linear system:
    [1 3 -4 2; 1 5 2 -4; 3 7 -6 12] ~ [1 3 -4 2; 0 2 6 -6; 0 -2 6 6] ~ [1 3 -4 2; 0 1 3 -3; 0 0 1 0] ~ [1 3 0 2; 0 1 0 -3; 0 0 1 0] ~ [1 0 0 11; 0 1 0 -3; 0 0 1 0]
The solution is x1 = 11, x2 = –3, x3 = 0. As a vector, the solution is x = [11; -3; 0].
12. To solve Ax = b, row reduce the augmented matrix [a1 a2 a3 b] for the corresponding linear system:
    [1 2 1 1; -3 -4 -2 2; 5 2 -3 -3] ~ [1 2 1 1; 0 2 1 5; 0 -8 -8 -8] ~ [1 2 1 1; 0 -8 -8 -8; 0 2 1 5] ~ [1 2 1 1; 0 1 1 1; 0 2 1 5]
    ~ [1 2 1 1; 0 1 1 1; 0 0 -1 3] ~ [1 2 0 4; 0 1 0 4; 0 0 1 -3] ~ [1 0 0 -4; 0 1 0 4; 0 0 1 -3]
The solution is x1 = –4, x2 = 4, x3 = –3. As a vector, the solution is x = [-4; 4; -3].
13. The vector u is in the plane spanned by the columns of A if and only if u is a linear combination of the columns of A. This happens if and only if the equation Ax = u has a solution. (See the box preceding Example 3 in Section 1.4.) To study this equation, reduce the augmented matrix [A u]:
    [3 -5 0; -2 6 4; 1 1 4] ~ [1 1 4; -2 6 4; 3 -5 0] ~ [1 1 4; 0 8 12; 0 -8 -12] ~ [1 1 4; 0 8 12; 0 0 0]
The equation Ax = u has a solution, so u is in the plane spanned by the columns of A.
For your information: The unique solution of Ax = u is (5/2, 3/2).
14. Reduce the augmented matrix [A u] to echelon form:
    [2 5 -1 4; 0 1 -1 -1; 1 2 0 4] ~ [1 2 0 4; 0 1 -1 -1; 2 5 -1 4] ~ [1 2 0 4; 0 1 -1 -1; 0 1 -1 -4] ~ [1 2 0 4; 0 1 -1 -1; 0 0 0 -3]
The equation Ax = u has no solution, so u is not in the subset spanned by the columns of A.
15. The augmented matrix for Ax = b is [3 -1 b1; -9 3 b2], which is row equivalent to [3 -1 b1; 0 0 b2 + 3b1]. This shows that the equation Ax = b is not consistent when 3b1 + b2 is nonzero. The set of b for which the equation is consistent is a line through the origin: the set of all points (b1, b2) satisfying b2 = –3b1.
16. Row reduce the augmented matrix [A b]:
    A = [1 -2 -1; -2 2 0; 4 -1 3], b = [b1; b2; b3]
    [1 -2 -1 b1; -2 2 0 b2; 4 -1 3 b3] ~ [1 -2 -1 b1; 0 -2 -2 b2+2b1; 0 7 7 b3-4b1]
    ~ [1 -2 -1 b1; 0 -2 -2 b2+2b1; 0 0 0 b3-4b1+(7/2)(b2+2b1)] = [1 -2 -1 b1; 0 -2 -2 b2+2b1; 0 0 0 3b1+(7/2)b2+b3]
The equation Ax = b is consistent if and only if 3b1 + (7/2)b2 + b3 = 0, or 6b1 + 7b2 + 2b3 = 0. The set of such b is a plane through the origin in R^3.
17. Row reduction shows that only three rows of A contain a pivot position:
    A = [1 3 0 3; -1 -1 -1 1; 0 -4 2 -8; 2 0 3 -1] ~ [1 3 0 3; 0 2 -1 4; 0 -4 2 -8; 0 -6 3 -7] ~ [1 3 0 3; 0 2 -1 4; 0 0 0 0; 0 0 0 5] ~ [1 3 0 3; 0 2 -1 4; 0 0 0 5; 0 0 0 0]
Because not every row of A contains a pivot position, Theorem 4 in Section 1.4 shows that the equation Ax = b does not have a solution for each b in R^4.
18. Row reduction shows that only three rows of B contain a pivot position:
    B = [1 4 1 2; 0 1 3 -4; 0 2 6 7; 2 9 5 -7] ~ [1 4 1 2; 0 1 3 -4; 0 2 6 7; 0 1 3 -11] ~ [1 4 1 2; 0 1 3 -4; 0 0 0 15; 0 0 0 -7] ~ [1 4 1 2; 0 1 3 -4; 0 0 0 15; 0 0 0 0]
Because not every row of B contains a pivot position, Theorem 4 in Section 1.4 shows that not all vectors in R^4 can be written as a linear combination of the columns of B. The columns of B certainly do not span R^3, because each column of B is in R^4, not R^3. (This question was asked to alert students to a fairly common misconception among students who are just learning about spanning.)
19. The work in Exercise 17 shows that statement (d) in Theorem 4 is false. So all four statements in Theorem 4 are false. Thus, not all vectors in R^4 can be written as a linear combination of the columns of A. Also, the columns of A do not span R^4.
20. The work in Exercise 18 shows that statement (d) in Theorem 4 is false. So all four statements in Theorem 4 are false. Thus, the equation Bx = y does not have a solution for each y in R^4, and the columns of B do not span R^4.
21. Row reduce the matrix [v1 v2 v3] to determine whether it has a pivot in each row.
    [1 0 1; 0 -1 0; -1 0 0; 0 1 1] ~ [1 0 1; 0 -1 0; 0 0 1; 0 1 1] ~ [1 0 1; 0 -1 0; 0 0 1; 0 0 1] ~ [1 0 1; 0 -1 0; 0 0 1; 0 0 0]
The matrix [v1 v2 v3] does not have a pivot in each row, so the columns of the matrix do not span R^4, by Theorem 4. That is, {v1, v2, v3} does not span R^4.
Note:
Some students may realize that row operations are not needed, and thereby discover the principle
covered in Exercises 31 and 32.
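The pivot-in-every-row test of Theorem 4 can be run mechanically: count the pivot rows of [v1 v2 v3] and compare with the number of rows. A Python sketch using exact arithmetic (the function name is our own):

```python
from fractions import Fraction

def pivot_rows(M):
    """Count pivot positions found by forward elimination."""
    M = [[Fraction(x) for x in row] for row in M]
    count = 0
    for col in range(len(M[0])):
        row = next((r for r in range(count, len(M)) if M[r][col] != 0), None)
        if row is None:
            continue
        M[count], M[row] = M[row], M[count]
        for r in range(count + 1, len(M)):
            f = M[r][col] / M[count][col]
            M[r] = [a - f * b for a, b in zip(M[r], M[count])]
        count += 1
    return count

# Exercise 21: [v1 v2 v3] has 4 rows but only 3 columns, hence at most 3 pivots
M = [[1, 0, 1], [0, -1, 0], [-1, 0, 0], [0, 1, 1]]
print(pivot_rows(M))  # 3 -- no pivot in every row, so the columns do not span R^4
```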
22. Row reduce the matrix [v1 v2 v3] to determine whether it has a pivot in each row.
    [0 0 4; 0 -3 -2; -3 9 -6] ~ [-3 9 -6; 0 -3 -2; 0 0 4]
The matrix [v1 v2 v3] has a pivot in each row, so the columns of the matrix span R^3, by Theorem 4. That is, {v1, v2, v3} spans R^3.
23. a. False. See the paragraph following equation (3). The text calls Ax = b a matrix equation.
b. True. See the box before Example 3.
c. False. See the warning following Theorem 4.
d. True. See Example 4.
e. True. See parts (c) and (a) in Theorem 4.
f. True. In Theorem 4, statement (a) is false if and only if statement (d) is also false.
24. a. True. This statement is in Theorem 3. However, the statement is true without any "proof" because, by definition, Ax is simply a notation for x1a1 + ⋯ + xnan, where a1, …, an are the columns of A.
b. True. See the box before Example 3.
c. True. See Example 2.
d. False. In Theorem 4, statement (d) is true if and only if statement (a) is true.
e. True. See Theorem 3.
f. False. In Theorem 4, statement (c) is false if and only if statement (a) is also false.
25. By definition, the matrix-vector product on the left is a linear combination of the columns of the matrix, in this case using weights –3, –1, and 2. So c1 = –3, c2 = –1, and c3 = 2.
26. The equation in x1 and x2 involves the vectors u, v, and w, and it may be viewed as
    [u v][x1; x2] = w
By definition of a matrix-vector product, x1u + x2v = w. The stated fact that 2u – 3v – w = 0 can be rewritten as 2u – 3v = w. So, a solution is x1 = 2, x2 = –3.
27. The matrix equation can be written as c1v1 + c2v2 + c3v3 + c4v4 + c5v5 = v6, where c1 = –3, c2 = 1, c3 = 2, c4 = –1, c5 = 2, and
    v1 = [-3; 5], v2 = [5; 8], v3 = [-4; 1], v4 = [9; -2], v5 = [7; -4], v6 = [11; -11]
28. Place the vectors q1, q2, and q3 into the columns of a matrix, say, Q and place the weights x1, x2, and x3 into a vector, say, x. Then the vector equation becomes
    Qx = v, where Q = [q1 q2 q3] and x = [x1; x2; x3]
Note: If your answer is the equation Ax = b, you need to specify what A and b are.
29. Start with any 3×3 matrix B in echelon form that has three pivot positions. Perform a row operation (a row interchange or a row replacement) that creates a matrix A that is not in echelon form. Then A has the desired property. The justification is given by row reducing A to B, in order to display the pivot positions. Since A has a pivot position in every row, the columns of A span R^3, by Theorem 4.
30. Start with any nonzero 3×3 matrix B in echelon form that has fewer than three pivot positions. Perform a row operation that creates a matrix A that is not in echelon form. Then A has the desired property. Since A does not have a pivot position in every row, the columns of A do not span R^3, by Theorem 4.
31. A 3×2 matrix has three rows and two columns. With only two columns, A can have at most two pivot columns, and so A has at most two pivot positions, which is not enough to fill all three rows. By Theorem 4, the equation Ax = b cannot be consistent for all b in R^3. Generally, if A is an m×n matrix with m > n, then A can have at most n pivot positions, which is not enough to fill all m rows. Thus, the equation Ax = b cannot be consistent for all b in R^m.
32. A set of three vectors in R^4 cannot span R^4. Reason: the matrix A whose columns are these three vectors has four rows. To have a pivot in each row, A would have to have at least four columns (one for each pivot), which is not the case. Since A does not have a pivot in every row, its columns do not span R^4, by Theorem 4. In general, a set of n vectors in R^m cannot span R^m when n is less than m.
33. If the equation Ax = b has a unique solution, then the associated system of equations does not have any free variables. If every variable is a basic variable, then each column of A is a pivot column. So the reduced echelon form of A must be
    [1 0 0; 0 1 0; 0 0 1; 0 0 0]
Note:
Exercises 33 and 36 are difficult in the context of this section because the focus in Section 1.4 is on
existence of solutions, not uniqueness. However, these exercises serve to review ideas from Section 1.2,
and they anticipate ideas that will come later.
34. Given Au1 = v1 and Au2 = v2, you are asked to show that the equation Ax = w has a solution, where w = v1 + v2. Observe that w = Au1 + Au2 and use Theorem 5(a) with u1 and u2 in place of u and v, respectively. That is, w = Au1 + Au2 = A(u1 + u2). So the vector x = u1 + u2 is a solution of w = Ax.
35. Suppose that y and z satisfy Ay = z. Then 5z = 5Ay. By Theorem 5(b), 5Ay = A(5y). So 5z = A(5y),
which shows that 5y is a solution of Ax = 5z. Thus, the equation Ax = 5z is consistent.
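The identity A(5y) = 5(Ay) used here is Theorem 5(b); it is easy to confirm numerically on any matrix. A Python sketch (the matrix and vector below are made up for the check, not taken from the exercise):

```python
def matvec(A, v):
    """Row-vector rule for Ax."""
    return [sum(a * b for a, b in zip(row, v)) for row in A]

A = [[2, 0, 1], [1, 3, -1]]   # illustrative matrix
y = [1, 2, 3]
z = matvec(A, y)              # z = Ay

lhs = matvec(A, [5 * t for t in y])   # A(5y)
rhs = [5 * t for t in z]              # 5(Ay) = 5z
print(lhs == rhs)  # True
```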
36. If the equation Ax = b has a unique solution, then the associated system of equations does not have any free variables. If every variable is a basic variable, then each column of A is a pivot column. So the reduced echelon form of A must be
    [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 1]
Now it is clear that A has a pivot position in each row. By Theorem 4, the columns of A span R^4.
37. [M] [7 2 -5 8; -5 -3 4 -9; 6 10 -2 7; -7 9 2 15] ~ [7 2 -5 8; 0 -11/7 3/7 -23/7; 0 58/7 16/7 1/7; 0 11 -3 23] ~ [7 2 -5 8; 0 -11/7 3/7 -23/7; 0 0 50/11 -189/11; 0 0 0 0]
or, approximately [7 2 -5 8; 0 -1.57 .429 -3.29; 0 0 4.55 -17.2; 0 0 0 0], to three significant figures. The original matrix does not have a pivot in every row, so its columns do not span R^4, by Theorem 4.
38. [M] [4 -5 -1 8; 3 -7 -4 2; -5 6 1 -4; 9 1 10 7] ~ [4 -5 -1 8; 0 -13/4 -13/4 -4; 0 -1/4 -1/4 6; 0 49/4 49/4 -11] ~ [4 -5 -1 8; 0 -13/4 -13/4 -4; 0 0 0 82/13; 0 0 0 0]
With pivots only in the first three rows, the original matrix has columns that do not span R^4, by Theorem 4.
39. [M] [10 -7 1 4 6; -8 4 -6 -10 -3; -7 11 -5 -1 -8; 3 -1 10 12 12]
    ~ [10 -7 1 4 6; 0 -8/5 -26/5 -34/5 9/5; 0 61/10 -43/10 9/5 -19/5; 0 11/10 97/10 54/5 51/5]
    ~ [10 -7 1 4 6; 0 -8/5 -26/5 -34/5 9/5; 0 0 -193/8 -193/8 49/16; 0 0 49/8 49/8 183/16]
    ~ [10 -7 1 4 6; 0 -8/5 -26/5 -34/5 9/5; 0 0 -193/8 -193/8 49/16; 0 0 0 0 4715/386]
The original matrix has a pivot in every row, so its columns span R^4, by Theorem 4.
40. [M] [5 11 -6 -7 12; -7 -3 -4 6 -9; 11 5 6 -9 -3; -3 4 -7 2 7]
    ~ [5 11 -6 -7 12; 0 62/5 -62/5 -19/5 39/5; 0 -96/5 96/5 32/5 -147/5; 0 53/5 -53/5 -11/5 71/5]
    ~ [5 11 -6 -7 12; 0 62/5 -62/5 -19/5 39/5; 0 0 0 16/31 -537/31; 0 0 0 65/62 467/62]
    ~ [5 11 -6 -7 12; 0 62/5 -62/5 -19/5 39/5; 0 0 0 16/31 -537/31; 0 0 0 0 1367/32]
The original matrix has a pivot in every row, so its columns span R^4, by Theorem 4.
41. [M] Examine the calculations in Exercise 39. Notice that the fourth column of the original matrix, say A, is not a pivot column. Let Ao be the matrix formed by deleting column 4 of A, let B be the echelon form obtained from A, and let Bo be the matrix obtained by deleting column 4 of B. The sequence of row operations that reduces A to B also reduces Ao to Bo. Since Bo is in echelon form, it shows that Ao has a pivot position in each row. Therefore, the columns of Ao span R^4.
It is possible to delete column 3 of A instead of column 4. In this case, the fourth column of A becomes a pivot column of Ao, as you can see by looking at what happens when column 3 of B is deleted. For later work, it is desirable to delete a nonpivot column.
Note:
Exercises 41 and 42 help to prepare for later work on the column space of a matrix. (See Section
2.9 or 4.6.) The Study Guide points out that these exercises depend on the following idea, not explicitly
mentioned in the text: when a row operation is performed on a matrix A, the calculations for each new
entry depend only on the other entries in the same column. If a column of A is removed, forming a new
matrix, the absence of this column has no effect on any row-operation calculations for entries in the other
columns of A. (The absence of a column might affect the particular choice of row operations performed
for some purpose, but that is not being considered here.)
42. [M] Examine the calculations in Exercise 40. The third column of the original matrix, say A, is not a pivot column. Let Ao be the matrix formed by deleting column 3 of A, let B be the echelon form obtained from A, and let Bo be the matrix obtained by deleting column 3 of B. The sequence of row operations that reduces A to B also reduces Ao to Bo. Since Bo is in echelon form, it shows that Ao has a pivot position in each row. Therefore, the columns of Ao span R^4.
It is possible to delete column 2 of A instead of column 3. (See the remark for Exercise 41.) However, only one column can be deleted. If two or more columns were deleted from A, the resulting matrix would have fewer than four columns, so it would have fewer than four pivot positions. In such a case, not every row could contain a pivot position, and the columns of the matrix would not span R^4, by Theorem 4.
Notes:
At the end of Section 1.4, the Study Guide gives students a method for learning and mastering
linear algebra concepts. Specific directions are given for constructing a review sheet that connects the
basic definition of “span” with related ideas: equivalent descriptions, theorems, geometric interpretations,
special cases, algorithms, and typical computations. I require my students to prepare such a sheet that
reflects their choices of material connected with “span”, and I make comments on their sheets to help
them refine their review. Later, the students use these sheets when studying for exams.
The MATLAB box for Section 1.4 introduces two useful commands gauss and bgauss that
allow a student to speed up row reduction while still visualizing all the steps involved. The command
B = gauss(A,1) causes MATLAB to find the left-most nonzero entry in row 1 of matrix A, and use
that entry as a pivot to create zeros in the entries below, using row replacement operations. The result is a
matrix that a student might write next to A as the first stage of row reduction, since there is no need to
write a new matrix after each separate row replacement. I use the gauss command frequently in lectures
to obtain an echelon form that provides data for solving various problems. For instance, if a matrix has 5
rows, and if row swaps are not needed, the following commands produce an echelon form of A:
B = gauss(A,1), B = gauss(B,2), B = gauss(B,3), B = gauss(B,4)
If an interchange is required, I can insert a command such as B = swap(B,2,5) . The command
bgauss uses the left-most nonzero entry in a row to produce zeros above that entry. This command,
together with scale, can change an echelon form into reduced echelon form.
The use of gauss and bgauss creates an environment in which students use their computer
program the same way they work a problem by hand on an exam. Unless you are able to conduct your
exams in a computer laboratory, it may be unwise to give students too early the power to obtain reduced
echelon forms with one command—they may have difficulty performing row reduction by hand during an
exam. Instructors whose students use a graphic calculator in class each day do not face this problem. In
such a case, you may wish to introduce rref earlier in the course than Chapter 4 (or Section 2.8), which
is where I finally allow students to use that command.
1.5 SOLUTIONS
Notes: The geometry helps students understand Span{u, v}, in preparation for later discussions of subspaces. The parametric vector form of a solution set will be used throughout the text. Figure 6 will appear again in Sections 2.9 and 4.8.
For solving homogeneous systems, the text recommends working with the augmented matrix, although no calculations take place in the augmented column. See the Study Guide comments on Exercise 7
that illustrate two common student errors.
All students need the practice of Exercises 1–14. (Assign all odd, all even, or a mixture. If you do not
assign Exercise 7, be sure to assign both 8 and 10.) Otherwise, a few students may be unable later to find
a basis for a null space or an eigenspace. Exercises 28–36 are important. Exercises 35 and 36 help
students later understand how solutions of Ax = 0 encode linear dependence relations among the columns
of A. Exercises 37–40 are more challenging. Exercise 37 will help students avoid the standard mistake of
forgetting that Theorem 6 applies only to a consistent equation Ax = b.
1. Reduce the augmented matrix to echelon form and circle the pivot positions. If a column of the
coefficient matrix is not a pivot column, the corresponding variable is free and the system of
equations has a nontrivial solution. Otherwise, the system has only the trivial solution.
[ 2  -5   8  0]   [2  -5   8  0]   [2  -5   8  0]
[-2  -7   1  0] ~ [0 -12   9  0] ~ [0 -12   9  0]
[ 4   2   7  0]   [0  12  -9  0]   [0   0   0  0]

The variable x3 is free, so the system has a nontrivial solution.
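The pivot-column test used in Exercises 1–4 can be automated. This is an added sketch (mine, not part of the manual): row reduce with exact fractions and count pivot columns; fewer pivots than columns means a free variable, hence a nontrivial solution of Ax = 0.

```python
from fractions import Fraction

def pivot_count(A):
    """Return the number of pivot columns of A (forward+backward elimination)."""
    A = [[Fraction(x) for x in row] for row in A]
    rows, cols, piv = len(A), len(A[0]), 0
    for c in range(cols):
        r = next((i for i in range(piv, rows) if A[i][c] != 0), None)
        if r is None:
            continue                      # no pivot in this column
        A[piv], A[r] = A[r], A[piv]       # bring the pivot row up
        for i in range(rows):
            if i != piv and A[i][c] != 0:
                f = A[i][c] / A[piv][c]
                A[i] = [a - f * b for a, b in zip(A[i], A[piv])]
        piv += 1
    return piv

A1 = [[2, -5, 8], [-2, -7, 1], [4, 2, 7]]   # Exercise 1 coefficient matrix
has_nontrivial = pivot_count(A1) < 3        # True: one column is not a pivot column
```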
2.
[ 1 -2  3  0]   [1 -2  3  0]
[-2 -3 -4  0] ~ [0 -7  2  0]
[ 2 -4  9  0]   [0  0  3  0]

There is no free variable; the system has only the trivial solution.
3.
[ 3 -4  8  0] ~ [3 -4    8     0]
[-2  5  4  0]   [0  7/3  28/3  0]

The variable x3 is free; the system has nontrivial
solutions. An alert student will realize that row operations are unnecessary. With only two equations,
there can be at most two basic variables. One variable must be free. Refer to Exercise 29 in Section
1.2.
32 CHAPTER 1 Linear Equations in Linear Algebra
Copyright © 2012 Pearson Education, Inc. Publishing as Addison-Wesley.
4.
[ 5  3 -2  0] ~ [5  3     -2     0]
[-3  4 -2  0]   [0  29/5  -16/5  0]

The variable x3 is free; the system has nontrivial
solutions. As in Exercise 3, row operations are unnecessary.
5.
[2 2 4 0]   [2 2 4 0]   [1 1 2 0]   [1 0 1 0]
[4 4 8 0] ~ [0 0 0 0] ~ [0 1 1 0] ~ [0 1 1 0]
[0 3 3 0]   [0 3 3 0]   [0 0 0 0]   [0 0 0 0]

The corresponding system is x1 + x3 = 0, x2 + x3 = 0. The variable x3 is free, x1 = -x3, and x2 = -x3. In parametric vector form, the general solution is

    [x1]   [-x3]      [-1]
x = [x2] = [-x3] = x3 [-1]
    [x3]   [ x3]      [ 1]
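A quick check (added here, not in the text, and assuming the Exercise 5 coefficient matrix reads [[2,2,4],[4,4,8],[0,3,3]] as reconstructed above): multiply the matrix by the direction vector (-1, -1, 1) and confirm every component is zero.

```python
# Exercise 5 coefficient matrix as read here (an assumption of this sketch)
A = [[2, 2, 4], [4, 4, 8], [0, 3, 3]]
v = [-1, -1, 1]                     # candidate direction of the solution line

# residual of A v; every entry should be 0 if v really solves Ax = 0
residual = [sum(a * x for a, x in zip(row, v)) for row in A]
```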
6.
[ 1 2 -3 0]   [1  2 -3 0]   [1 2 -3 0]   [1 0 -1 0]
[ 2 1 -3 0] ~ [0 -3  3 0] ~ [0 1 -1 0] ~ [0 1 -1 0]
[-1 1  0 0]   [0  3 -3 0]   [0 0  0 0]   [0 0  0 0]

The corresponding system is x1 - x3 = 0, x2 - x3 = 0. The variable x3 is free, x1 = x3, and x2 = x3. In parametric vector form, the general solution is

    [x1]   [x3]      [1]
x = [x2] = [x3] = x3 [1]
    [x3]   [x3]      [1]
7.
[1 3 -3 7 0] ~ [1 0  9 -8 0]
[0 1 -4 5 0]   [0 1 -4  5 0]

The corresponding system is x1 + 9x3 - 8x4 = 0, x2 - 4x3 + 5x4 = 0. The basic variables are x1 and x2, with x3 and x4 free. Next, x1 = -9x3 + 8x4, and x2 = 4x3 - 5x4. The general solution is

    [x1]   [-9x3 + 8x4]      [-9]      [ 8]
x = [x2] = [ 4x3 - 5x4] = x3 [ 4] + x4 [-5]
    [x3]   [    x3    ]      [ 1]      [ 0]
    [x4]   [    x4    ]      [ 0]      [ 1]
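As a small cross-check (an added sketch, assuming the reduced system x1 = -9x3 + 8x4, x2 = 4x3 - 5x4 derived in this exercise), evaluate the parametric form x = x3·u + x4·v for sample weights and confirm it satisfies the reduced equations:

```python
def solution(x3, x4):
    """Parametric vector form of the Exercise 7 solution set."""
    u = [-9, 4, 1, 0]   # vector attached to the free variable x3
    v = [8, -5, 0, 1]   # vector attached to the free variable x4
    return [x3 * a + x4 * b for a, b in zip(u, v)]

x = solution(2, 3)      # one sample choice of the free variables
# check x1 + 9x3 - 8x4 = 0 and x2 - 4x3 + 5x4 = 0
ok = (x[0] + 9 * x[2] - 8 * x[3] == 0) and (x[1] - 4 * x[2] + 5 * x[3] == 0)
```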
8.
[1 -3 -8  5 0] ~ [1 0 -2 -7 0]
[0  1  2 -4 0]   [0 1  2 -4 0]

The corresponding system is x1 - 2x3 - 7x4 = 0, x2 + 2x3 - 4x4 = 0. The basic variables are x1 and x2, with x3 and x4 free. Next, x1 = 2x3 + 7x4 and x2 = -2x3 + 4x4. The general solution in parametric vector form is
    [x1]   [ 2x3 + 7x4]      [ 2]      [7]
x = [x2] = [-2x3 + 4x4] = x3 [-2] + x4 [4]
    [x3]   [     x3   ]      [ 1]      [0]
    [x4]   [     x4   ]      [ 0]      [1]
9.
[ 3 -6  6 0]   [ 1 -2  2 0]   [1 -2 2 0]   [1 -2 0 0]
[-2  4 -2 0] ~ [-2  4 -2 0] ~ [0  0 2 0] ~ [0  0 1 0]

The corresponding system is x1 - 2x2 = 0, x3 = 0. The solution is x1 = 2x2, x3 = 0, with x2 free. In parametric vector form,

    [x1]   [2x2]      [2]
x = [x2] = [ x2] = x2 [1]
    [x3]   [  0]      [0]
10.
[1 -4 0 4 0] ~ [1 -4 0 4 0] ~ [1 0 0 4 0]
[2  8 0 8 0]   [0  1 0 0 0]   [0 1 0 0 0]

The corresponding system is x1 + 4x4 = 0, x2 = 0. The basic variables are x1 and x2, so x3 and x4 are free. (Note that x3 is not zero.) Also, x1 = -4x4. The general solution is

    [x1]   [-4x4]      [0]      [-4]
x = [x2] = [   0] = x3 [0] + x4 [ 0]
    [x3]   [  x3]      [1]      [ 0]
    [x4]   [  x4]      [0]      [ 1]
11.
[1 -4 -2 0 3 -5 0]   [1 -4 -2 0 0  7 0]   [1 -4 0 0 0  5 0]
[0  0  1 0 0 -1 0] ~ [0  0  1 0 0 -1 0] ~ [0  0 1 0 0 -1 0]
[0  0  0 0 1 -4 0]   [0  0  0 0 1 -4 0]   [0  0 0 0 1 -4 0]
[0  0  0 0 0  0 0]   [0  0  0 0 0  0 0]   [0  0 0 0 0  0 0]

The corresponding system is x1 - 4x2 + 5x6 = 0, x3 - x6 = 0, x5 - 4x6 = 0. The basic variables are x1, x3, and x5. The remaining variables are free. In particular, x4 is free (and not zero as some may assume). The solution is x1 = 4x2 - 5x6, x3 = x6, x5 = 4x6, with x2, x4, and x6 free. In parametric vector form,
    [x1]   [4x2 - 5x6]      [4]      [0]      [-5]
    [x2]   [    x2   ]      [1]      [0]      [ 0]
x = [x3] = [    x6   ] = x2 [0] + x4 [0] + x6 [ 1]
    [x4]   [    x4   ]      [0]      [1]      [ 0]
    [x5]   [   4x6   ]      [0]      [0]      [ 4]
    [x6]   [    x6   ]      [0]      [0]      [ 1]
Note: The Study Guide discusses two mistakes that students often make on this type of problem.
12.
[1 -2 3 -6 5  0 0]   [1 -2 3 -6 5 0 0]   [1 -2 3 0 29 0 0]
[0  0 0  1 4 -6 0] ~ [0  0 0  1 4 0 0] ~ [0  0 0 1  4 0 0]
[0  0 0  0 0  1 0]   [0  0 0  0 0 1 0]   [0  0 0 0  0 1 0]
[0  0 0  0 0  0 0]   [0  0 0  0 0 0 0]   [0  0 0 0  0 0 0]

The corresponding system is x1 - 2x2 + 3x3 + 29x5 = 0, x4 + 4x5 = 0, x6 = 0. The basic variables are x1, x4, and x6; the free variables are x2, x3, and x5. The general solution is x1 = 2x2 - 3x3 - 29x5, x4 = -4x5, and x6 = 0. In parametric vector form, the solution is

    [x1]   [2x2 - 3x3 - 29x5]      [2]      [-3]      [-29]
    [x2]   [       x2       ]      [1]      [ 0]      [  0]
x = [x3] = [       x3       ] = x2 [0] + x3 [ 1] + x5 [  0]
    [x4]   [      -4x5      ]      [0]      [ 0]      [ -4]
    [x5]   [       x5       ]      [0]      [ 0]      [  1]
    [x6]   [        0       ]      [0]      [ 0]      [  0]
13. To write the general solution in parametric vector form, pull out the constant terms that do not
involve the free variable:
    [x1]   [ 5 + 4x3]   [ 5]      [ 4]
x = [x2] = [-2 - 7x3] = [-2] + x3 [-7] = p + x3 q
    [x3]   [    x3  ]   [ 0]      [ 1]

Geometrically, the solution set is the line through p = (5, -2, 0) in the direction of q = (4, -7, 1).
14. To write the general solution in parametric vector form, pull out the constant terms that do not
involve the free variable:
    [x1]   [   5x4 ]   [0]      [ 5]
x = [x2] = [3 - 2x4] = [3] + x4 [-2] = p + x4 q
    [x3]   [2 + 5x4]   [2]      [ 5]
    [x4]   [    x4 ]   [0]      [ 1]

The solution set is the line through p in the direction of q.
15. Solve x1 + 5x2 - 3x3 = -2 for the basic variable: x1 = -2 - 5x2 + 3x3, with x2 and x3 free. In vector form, the solution is

    [x1]   [-2 - 5x2 + 3x3]   [-2]      [-5]      [3]
x = [x2] = [       x2     ] = [ 0] + x2 [ 1] + x3 [0]
    [x3]   [       x3     ]   [ 0]      [ 0]      [1]

The solution of x1 + 5x2 - 3x3 = 0 is x1 = -5x2 + 3x3, with x2 and x3 free. In vector form,

    [x1]   [-5x2 + 3x3]      [-5]      [3]
x = [x2] = [     x2   ] = x2 [ 1] + x3 [0] = x2 u + x3 v
    [x3]   [     x3   ]      [ 0]      [1]

The solution set of the homogeneous equation is the plane through the origin in R^3 spanned by u and v. The solution set of the nonhomogeneous equation is parallel to this plane and passes through the point p = (-2, 0, 0).
16. Solve x1 - 2x2 + 3x3 = 4 for the basic variable: x1 = 4 + 2x2 - 3x3, with x2 and x3 free. In vector form, the solution is

    [x1]   [4 + 2x2 - 3x3]   [4]      [2]      [-3]
x = [x2] = [      x2     ] = [0] + x2 [1] + x3 [ 0]
    [x3]   [      x3     ]   [0]      [0]      [ 1]

The solution of x1 - 2x2 + 3x3 = 0 is x1 = 2x2 - 3x3, with x2 and x3 free. In vector form,

    [x1]   [2x2 - 3x3]      [2]      [-3]
x = [x2] = [    x2   ] = x2 [1] + x3 [ 0] = x2 u + x3 v
    [x3]   [    x3   ]      [0]      [ 1]

The solution set of the homogeneous equation is the plane through the origin in R^3 spanned by u and v. The solution set of the nonhomogeneous equation is parallel to this plane and passes through the point p = (4, 0, 0).
17. Row reduce the augmented matrix for the system:
[2 2 4   8]   [2 2 4   8]   [2 2 4   8]   [1 1 2  4]   [1 0 1  8]
[4 4 8  16] ~ [0 0 0   0] ~ [0 3 3 -12] ~ [0 1 1 -4] ~ [0 1 1 -4]
[0 3 3 -12]   [0 3 3 -12]   [0 0 0   0]   [0 0 0  0]   [0 0 0  0]

The corresponding system is x1 + x3 = 8, x2 + x3 = -4. Thus x1 = 8 - x3, x2 = -4 - x3, and x3 is free. In parametric vector form,

    [x1]   [ 8 - x3]   [ 8]      [-1]
x = [x2] = [-4 - x3] = [-4] + x3 [-1]
    [x3]   [     x3]   [ 0]      [ 1]

The solution set is the line through (8, -4, 0), parallel to the line that is the solution set of the homogeneous system in Exercise 5.
18. Row reduce the augmented matrix for the system:
[ 1 2 -3  5]   [1  2 -3  5]   [1 2 -3  5]   [1 0 -1  7]
[ 2 1 -3 13] ~ [0 -3  3  3] ~ [0 1 -1 -1] ~ [0 1 -1 -1]
[-1 1  0 -8]   [0  3 -3 -3]   [0 0  0  0]   [0 0  0  0]

The corresponding system is x1 - x3 = 7, x2 - x3 = -1. Thus x1 = 7 + x3, x2 = -1 + x3, and x3 is free. In parametric vector form,

    [x1]   [ 7 + x3]   [ 7]      [1]
x = [x2] = [-1 + x3] = [-1] + x3 [1]
    [x3]   [     x3]   [ 0]      [1]

The solution set is the line through (7, -1, 0), parallel to the line that is the solution set of the homogeneous system in Exercise 6.
19. The line through a parallel to b can be written as x = a + t b, where t represents a parameter:
    [x1]   [-2]     [-5]            x1 = -2 - 5t
x = [x2] = [ 0] + t [ 3],    or     x2 = 3t
20. The line through a parallel to b can be written as x = a + tb, where t represents a parameter:
    [x1]   [3]     [7]            x1 = 3 + 7t
x = [x2] = [2] + t [6],    or     x2 = 2 + 6t
21. The line through p and q is parallel to q - p. So, given p = [3; -3] and q = [4; 1], form

q - p = [4 - 3; 1 - (-3)] = [1; 4],

and write the line as x = p + t(q - p) = [3; -3] + t[1; 4].
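The p + t(q - p) parametrization used in Exercises 21–22 is easy to encode; this sketch is mine (the vectors below are the Exercise 21 values as read here: p = (3, -3), q = (4, 1)):

```python
def line_point(p, q, t):
    """Point on the line through p and q at parameter t: p + t*(q - p)."""
    return [pi + t * (qi - pi) for pi, qi in zip(p, q)]

p, q = [3, -3], [4, 1]
assert line_point(p, q, 0) == p   # t = 0 gives p
assert line_point(p, q, 1) == q   # t = 1 gives q
```

Values of t outside [0, 1] continue the line past q or before p.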
22. The line through p and q is parallel to q - p. So, given p = [-3; 2] and q = [0; -3], form

q - p = [0 - (-3); -3 - 2] = [3; -5],

and write the line as x = p + t(q - p) = [-3; 2] + t[3; -5].
Note: Exercises 21 and 22 prepare for Exercise 26 in Section 1.8.
23. a. True. See the first paragraph of the subsection titled Homogeneous Linear Systems.
b. False. The equation Ax = 0 gives an implicit description of its solution set. See the subsection
entitled Parametric Vector Form.
c. False. The equation Ax = 0 always has the trivial solution. The box before Example 1 uses the
word nontrivial instead of trivial.
d. False. The line goes through p parallel to v. See the paragraph that precedes Fig. 5.
e. False. The solution set could be empty! The statement (from Theorem 6) is true only when there
exists a vector p such that Ap = b.
24. a. False. The trivial solution is always a solution to a homogeneous system of linear equations.
b. False. A nontrivial solution of Ax = 0 is any nonzero x that satisfies the equation. See the
sentence before Example 2.
c. True. See the paragraph following Example 3.
d. True. If the zero vector is a solution, then b = Ax = A0 = 0.
e. True. See Theorem 6.
25. Suppose p satisfies Ax = b. Then Ap = b. Theorem 6 says that the solution set of Ax = b equals the set S = {w : w = p + vh for some vh such that Avh = 0}. There are two things to prove: (a) every vector in S satisfies Ax = b, (b) every vector that satisfies Ax = b is in S.
a. Let w have the form w = p + vh, where Avh = 0. Then
Aw = A(p + vh) = Ap + Avh     by Theorem 5(a) in Section 1.4
   = b + 0 = b
So every vector of the form p + vh satisfies Ax = b.
b. Now let w be any solution of Ax = b, and set vh = w - p. Then
Avh = A(w - p) = Aw - Ap = b - b = 0
So vh satisfies Ax = 0. Thus every solution of Ax = b has the form w = p + vh.
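The two-part argument can be illustrated numerically. The matrix below is my own toy example, not from the text: p solves Ax = b, vh solves Ax = 0, and their sum still solves Ax = b.

```python
def matvec(A, x):
    """Matrix-vector product with plain lists."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

# hypothetical example data (an assumption of this sketch)
A = [[1, 0], [0, 0]]
b = [3, 0]
p, vh = [3, 0], [0, 5]        # Ap = b and A vh = 0

w = [pi + vi for pi, vi in zip(p, vh)]
checks = (matvec(A, p) == b, matvec(A, vh) == [0, 0], matvec(A, w) == b)
```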
26. When A is the 3×3 zero matrix, every x in R^3 satisfies Ax = 0. So the solution set is all vectors in R^3.
27. (Geometric argument using Theorem 6.) Since Ax = b is consistent, its solution set is obtained by
translating the solution set of Ax = 0, by Theorem 6. So the solution set of Ax = b is a single vector if
and only if the solution set of Ax = 0 is a single vector, and that happens if and only if Ax = 0 has
only the trivial solution.
(Proof using free variables.) If Ax = b has a solution, then the solution is unique if and only if there
are no free variables in the corresponding system of equations, that is, if and only if every column of
A is a pivot column. This happens if and only if the equation Ax = 0 has only the trivial solution.
28. a. When A is a 3×3 matrix with three pivot positions, the equation Ax = 0 has no free variables and
hence has no nontrivial solution.
b. With three pivot positions, A has a pivot position in each of its three rows. By Theorem 4 in
Section 1.4, the equation Ax = b has a solution for every possible b. The term "possible" in the exercise means that the only vectors considered in this case are those in R^3, because A has three rows.
29. a. When A is a 4×4 matrix with three pivot positions, the equation Ax = 0 has three basic variables and one free variable. So Ax = 0 has a nontrivial solution.
b. With only three pivot positions, A cannot have a pivot in every row, so by Theorem 4 in Section 1.4, the equation Ax = b cannot have a solution for every possible b (in R^4).
30. a. When A is a 2×5 matrix with two pivot positions, the equation Ax = 0 has two basic variables and three free variables. So Ax = 0 has a nontrivial solution.
b. With two pivot positions and only two rows, A has a pivot position in every row. By Theorem 4 in Section 1.4, the equation Ax = b has a solution for every possible b (in R^2).
31. a. When A is a 3×2 matrix with two pivot positions, each column is a pivot column. So the equation Ax = 0 has no free variables and hence no nontrivial solution.
b. With two pivot positions and three rows, A cannot have a pivot in every row. So the equation Ax = b cannot have a solution for every possible b (in R^3), by Theorem 4 in Section 1.4.
32. No. If the solution set of Ax = b contained the origin, then 0 would satisfy A0 = b, which is not true since b is not the zero vector.
33. Look for A = [a1 a2 a3] such that 1·a1 + 1·a2 + 1·a3 = 0. That is, construct A so that each row sum (the sum of the entries in a row) is zero.
34. Look for A = [a1 a2 a3] such that 2·a1 - 1·a2 + 1·a3 = 0. That is, construct A so that the second column minus the third column equals twice the first column.
35. Look at

   [-1]      [-3]
x1 [ 7] + x2 [21]
   [-2]      [-6]

and notice that the second column is 3 times the first. So suitable values for x1 and x2 would be 3 and -1 respectively. (Another pair would be 6 and -2, etc.) Thus x = [3; -1] satisfies Ax = 0.
36. Inspect how the columns a1 and a2 of A are related. The second column is -2/3 times the first. Put another way, 2a1 + 3a2 = 0. Thus [2; 3] satisfies Ax = 0.
Note: Exercises 35 and 36 set the stage for the concept of linear dependence.
37. Since the solution set of Ax = 0 contains the point (4,1), the vector x = (4,1) satisfies Ax = 0. Write this equation as a vector equation, using a1 and a2 for the columns of A:

4·a1 + 1·a2 = 0

Then a2 = -4a1. So choose any nonzero vector for the first column of A and multiply that column by -4 to get the second column of A. For example, set

A = [1 -4]
    [1 -4]

Finally, the only way the solution set of Ax = b could not be parallel to the line through (4,1) and the origin is for the solution set of Ax = b to be empty. This does not contradict Theorem 6, because that theorem applies only to the case when the equation Ax = b has a nonempty solution set. For b, take any vector that is not a multiple of the columns of A.
Note: In the Study Guide, a “Checkpoint” for Section 1.5 will help students with Exercise 37.
38. If w satisfies Ax = 0, then Aw = 0. For any scalar c, Theorem 5(b) in Section 1.4 shows that
A(cw) = cAw = c0 = 0.
39. Suppose Av = 0 and Aw = 0. Then, since A(v + w) = Av + Aw by Theorem 5(a) in Section 1.4,
A(v + w) = Av + Aw = 0 + 0 = 0.
Now, let c and d be scalars. Using both parts of Theorem 5,
A(cv + dw) = A(cv) + A(dw) = cAv + dAw = c0 + d0 = 0.
40. No. If Ax = b has no solution, then A cannot have a pivot in each row. Since A is 3×3, it has at most
two pivot positions. So the equation Ax = y for any y has at most two basic variables and at least one
free variable. Thus, the solution set for Ax = y is either empty or has infinitely many elements.
Note: The MATLAB box in the Study Guide introduces the zeros command, in order to augment a
matrix with a column of zeros.
1.6 SOLUTIONS
1. Fill in the exchange table one column at a time. The entries in a column describe where a sector's
output goes. The decimal fractions in each column sum to 1.
Distribution of Output From:

Goods   Services   Purchased by:
 .2       .7       Goods
 .8       .3       Services
Denote the total annual output (in dollars) of the sectors by pG and pS. From the first row, the total input to the Goods sector is .2pG + .7pS. The Goods sector must pay for that. So the equilibrium prices must satisfy

pG = .2pG + .7pS     (income = expenses)
From the second row, the input (that is, the expense) of the Services sector is .8pG + .3pS. The equilibrium equation for the Services sector is

pS = .8pG + .3pS     (income = expenses)

Move all variables to the left side and combine like terms:

 .8pG - .7pS = 0
-.8pG + .7pS = 0

Row reduce the augmented matrix:

[ .8 -.7 0] ~ [.8 -.7 0] ~ [1 -.875 0]
[-.8  .7 0]   [ 0   0 0]   [0   0   0]

The general solution is pG = .875pS, with pS free. One equilibrium solution is pS = 1000 and pG = 875. If one uses fractions instead of decimals in the calculations, the general solution would be written pG = (7/8)pS, and a natural choice of prices might be pS = 80 and pG = 70. Only the ratio of the prices is important: pG = .875pS. The economic equilibrium is unaffected by a proportional change in prices.
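A quick check of the equilibrium relation (my own verification sketch): with pG = (7/8)pS, each sector's income equals its expenses; exact fractions avoid decimal rounding.

```python
from fractions import Fraction

pS = Fraction(80)
pG = Fraction(7, 8) * pS                     # the natural fraction choice: pG = 70

# income = expenses for each sector, using the exchange-table columns
goods_balanced = pG == Fraction(2, 10) * pG + Fraction(7, 10) * pS
services_balanced = pS == Fraction(8, 10) * pG + Fraction(3, 10) * pS
```

Scaling both prices by any positive constant preserves both equalities, which is why only the ratio matters.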
2. Take some other value for pS, say 200 million dollars. The other equilibrium prices are then pC = 188 million, pE = 170 million. Any constant nonnegative multiple of these prices is a set of equilibrium prices, because the solution set of the system of equations consists of all multiples of one
equilibrium prices, because the solution set of the system of equations consists of all multiples of one
vector. Changing the unit of measurement to another currency such as Japanese yen has the same
effect as multiplying all equilibrium prices by a constant. The ratios of the prices remain the same,
no matter what currency is used.
3. a. Fill in the exchange table one column at a time. The entries in a column describe where a sector’s
output goes. The decimal fractions in each column sum to 1.
Distribution of Output From:

Fuels and Power   Manufacturing   Services   Purchased by:
      .10              .10          .20      Fuels and Power
      .80              .10          .40      Manufacturing
      .10              .80          .40      Services
b. Denote the total annual output (in dollars) of the sectors by pF, pM, and pS. From the first row of the table, the total input to the Fuels & Power sector is .1pF + .1pM + .2pS. So the equilibrium prices must satisfy

pF = .1pF + .1pM + .2pS     (income = expenses)

From the second and third rows of the table, the income/expense requirements for the Manufacturing sector and the Services sector are, respectively,

pM = .8pF + .1pM + .4pS
pS = .1pF + .8pM + .4pS
Move all variables to the left side and combine like terms:

 .9pF - .1pM - .2pS = 0        [ .9 -.1 -.2 0]
-.8pF + .9pM - .4pS = 0        [-.8  .9 -.4 0]
-.1pF - .8pM + .6pS = 0        [-.1 -.8  .6 0]
c. [M] You can obtain the reduced echelon form with a matrix program.
[ .9 -.1 -.2 0]   [1 0 -.301 0]    (The number of decimal
[-.8  .9 -.4 0] ~ [0 1 -.712 0]     places displayed is
[-.1 -.8  .6 0]   [0 0   0   0]     somewhat arbitrary.)

The general solution is pF = .301pS, pM = .712pS, with pS free. If pS is assigned the value of 100, then pF = 30.1 and pM = 71.2. Note that only the ratios of the prices are determined. This makes
sense, for if they were converted from, say, dollars to yen or Euros, the inputs and outputs of each
sector would still balance. The economic equilibrium is not affected by a proportional change in
prices.
4. a. Fill in the exchange table one column at a time. The entries in each column must sum to 1.
Distribution of Output From:

Mining   Lumber   Energy   Transportation   Purchased by:
 .30      .15      .20         .20          Mining
 .10      .15      .15         .10          Lumber
 .60      .50      .45         .50          Energy
  0       .20      .20         .20          Transportation
b. [M] Denote the total annual output of the sectors by pM, pL, pE, and pT, respectively. From the first row of the table, the total input to the Mining sector is .30pM + .15pL + .20pE + .20pT. So the equilibrium prices must satisfy

pM = .30pM + .15pL + .20pE + .20pT     (income = expenses)

From the second, third, and fourth rows of the table, the equilibrium equations are

pL = .10pM + .15pL + .15pE + .10pT
pE = .60pM + .50pL + .45pE + .50pT
pT = .20pL + .20pE + .20pT

Move all variables to the left side and combine like terms:

 .70pM - .15pL - .20pE - .20pT = 0
-.10pM + .85pL - .15pE - .10pT = 0
-.60pM - .50pL + .55pE - .50pT = 0
       - .20pL - .20pE + .80pT = 0

Reduce the augmented matrix to reduced echelon form:
[ .70 -.15 -.20 -.20 0]   [1 0 0 -1.37 0]
[-.10  .85 -.15 -.10 0] ~ [0 1 0 -.84  0]
[-.60 -.50  .55 -.50 0]   [0 0 1 -3.16 0]
[  0  -.20 -.20  .80 0]   [0 0 0   0   0]

Solve for the basic variables in terms of the free variable pT, and obtain pM = 1.37pT, pL = .84pT, and pE = 3.16pT. The data probably justifies at most two significant figures, so take pT = 100 and round off the other prices to pM = 137, pL = 84, and pE = 316.
5. a. Fill in the exchange table one column at a time. The entries in each column must sum to 1.
Distribution of Output From:

Agriculture   Manufacturing   Services   Transportation   Purchased by:
    .20            .35           .10          .20         Agriculture
    .20            .10           .20          .30         Manufacturing
    .30            .35           .50          .20         Services
    .30            .20           .20          .30         Transportation
b. [M] Denote the total annual output of the sectors by pA, pM, pS, and pT, respectively. The equilibrium equations are

pA = .20pA + .35pM + .10pS + .20pT
pM = .20pA + .10pM + .20pS + .30pT
pS = .30pA + .35pM + .50pS + .20pT
pT = .30pA + .20pM + .20pS + .30pT

Move all variables to the left side and combine like terms:

 .80pA - .35pM - .10pS - .20pT = 0
-.20pA + .90pM - .20pS - .30pT = 0
-.30pA - .35pM + .50pS - .20pT = 0
-.30pA - .20pM - .20pS + .70pT = 0

Reduce the augmented matrix to reduced echelon form:

[ .80 -.35 -.10 -.20 0]   [1 0 0 -.799  0]
[-.20  .90 -.20 -.30 0] ~ [0 1 0 -.836  0]
[-.30 -.35  .50 -.20 0]   [0 0 1 -1.465 0]
[-.30 -.20 -.20  .70 0]   [0 0 0   0    0]

Solve for the basic variables in terms of the free variable pT, and obtain pA = .799pT, pM = .836pT, and pS = 1.465pT. Take pT = $10.00 and round off the other prices to pA = $7.99, pM = $8.36, and pS = $14.65 per unit.
c. Construct the new exchange table one column at a time. The entries in each column must sum to 1.
Distribution of Output From:

Agriculture   Manufacturing   Services   Transportation   Purchased by:
    .20            .35           .10          .20         Agriculture
    .10            .10           .20          .30         Manufacturing
    .40            .35           .50          .20         Services
    .30            .20           .20          .30         Transportation
d. [M] The new equilibrium equations are

pA = .20pA + .35pM + .10pS + .20pT
pM = .10pA + .10pM + .20pS + .30pT
pS = .40pA + .35pM + .50pS + .20pT
pT = .30pA + .20pM + .20pS + .30pT

Move all variables to the left side and combine like terms:

 .80pA - .35pM - .10pS - .20pT = 0
-.10pA + .90pM - .20pS - .30pT = 0
-.40pA - .35pM + .50pS - .20pT = 0
-.30pA - .20pM - .20pS + .70pT = 0

Reduce the augmented matrix to reduced echelon form:

[ .80 -.35 -.10 -.20 0]   [1 0 0 -.781  0]
[-.10  .90 -.20 -.30 0] ~ [0 1 0 -.767  0]
[-.40 -.35  .50 -.20 0]   [0 0 1 -1.562 0]
[-.30 -.20 -.20  .70 0]   [0 0 0   0    0]

Solve for the basic variables in terms of the free variable pT, and obtain pA = .781pT, pM = .767pT, and pS = 1.562pT. Take pT = $10.00 and round off the other prices to pA = $7.81, pM = $7.67, and pS = $15.62 per unit. The campaign has caused the unit prices for the Agriculture and Manufacturing sectors to go down slightly, while the unit price for the Services sector has risen by $.97 per unit. The campaign has benefited the Services sector the most.
6. The following vectors list the numbers of atoms of aluminum (Al), oxygen (O), and carbon (C):

        [2]      [0]      [1]       [0]   aluminum
Al2O3:  [3]   C: [0]  Al: [0]  CO2: [2]   oxygen
        [0]      [1]      [0]       [1]   carbon

The coefficients in the equation x1·Al2O3 + x2·C → x3·Al + x4·CO2 satisfy
   [2]      [0]      [1]      [0]
x1 [3] + x2 [0] = x3 [0] + x4 [2]
   [0]      [1]      [0]      [1]

Move the right terms to the left side (changing the sign of each entry in the third and fourth vectors) and row reduce the augmented matrix of the homogeneous system:

[2 0 -1  0 0]   [1 0 -1/2  0 0]   [1 0 -1/2   0   0]   [1 0 0 -2/3 0]
[3 0  0 -2 0] ~ [3 0  0   -2 0] ~ [0 1  0    -1   0] ~ [0 1 0 -1   0]
[0 1  0 -1 0]   [0 1  0   -1 0]   [0 0  1   -4/3  0]   [0 0 1 -4/3 0]

The general solution is x1 = (2/3)x4, x2 = x4, x3 = (4/3)x4, with x4 free. Take x4 = 3. Then x1 = 2, x2 = 3, and x3 = 4. The balanced equation is

2Al2O3 + 3C → 4Al + 3CO2
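The balanced equation can be verified by comparing atom counts on both sides. This is my own verification sketch, using the atom vectors listed above (ordered Al, O, C):

```python
# atoms per molecule, in the order (Al, O, C)
al2o3, c, al, co2 = [2, 3, 0], [0, 0, 1], [1, 0, 0], [0, 2, 1]

def combine(coeff_vecs):
    """Total atom counts of a sum of coefficient * molecule terms."""
    return [sum(k * v[i] for k, v in coeff_vecs) for i in range(3)]

left = combine([(2, al2o3), (3, c)])     # 2 Al2O3 + 3 C
right = combine([(4, al), (3, co2)])     # 4 Al + 3 CO2
balanced = left == right
```

The same check applies to Exercises 7–11: an equation is balanced exactly when the two combined atom-count vectors agree.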
7. The following vectors list the numbers of atoms of sodium (Na), hydrogen (H), carbon (C), and oxygen (O):

         [1]             [0]              [3]       [0]       [0]   sodium
NaHCO3:  [1]  H3C6H5O7:  [8]  Na3C6H5O7:  [5]  H2O: [2]  CO2: [0]   hydrogen
         [1]             [6]              [6]       [0]       [1]   carbon
         [3]             [7]              [7]       [1]       [2]   oxygen

The order of the various atoms is not important. The list here was selected by writing the elements in the order in which they first appear in the chemical equation, reading left to right:

x1·NaHCO3 + x2·H3C6H5O7 → x3·Na3C6H5O7 + x4·H2O + x5·CO2

The coefficients x1, …, x5 satisfy the vector equation

   [1]      [0]      [3]      [0]      [0]
x1 [1] + x2 [8] = x3 [5] + x4 [2] + x5 [0]
   [1]      [6]      [6]      [0]      [1]
   [3]      [7]      [7]      [1]      [2]

Move all the terms to the left side (changing the sign of each entry in the third, fourth, and fifth vectors) and reduce the augmented matrix:

[1 0 -3  0  0 0]   [1 0 0 0 -1   0]
[1 8 -5 -2  0 0] ~ [0 1 0 0 -1/3 0]
[1 6 -6  0 -1 0]   [0 0 1 0 -1/3 0]
[3 7 -7 -1 -2 0]   [0 0 0 1 -1   0]

The general solution is x1 = x5, x2 = (1/3)x5, x3 = (1/3)x5, x4 = x5, and x5 is free. Take x5 = 3. Then x1 = x4 = 3, and x2 = x3 = 1. The balanced equation is

3NaHCO3 + H3C6H5O7 → Na3C6H5O7 + 3H2O + 3CO2
8. The following vectors list the numbers of atoms of hydrogen (H), oxygen (O), calcium (Ca), and carbon (C):

      [3]         [0]       [2]      [0]       [0]   hydrogen
H3O:  [1]  CaCO3: [3]  H2O: [1]  Ca: [0]  CO2: [2]   oxygen
      [0]         [1]       [0]      [1]       [0]   calcium
      [0]         [1]       [0]      [0]       [1]   carbon

The coefficients in the chemical equation

x1·H3O + x2·CaCO3 → x3·H2O + x4·Ca + x5·CO2

satisfy the vector equation

   [3]      [0]      [2]      [0]      [0]
x1 [1] + x2 [3] = x3 [1] + x4 [0] + x5 [2]
   [0]      [1]      [0]      [1]      [0]
   [0]      [1]      [0]      [0]      [1]

Move the terms to the left side (changing the sign of each entry in the last three vectors) and reduce the augmented matrix:

[3 0 -2  0  0 0]   [1 0 0 0 -2 0]
[1 3 -1  0 -2 0] ~ [0 1 0 0 -1 0]
[0 1  0 -1  0 0]   [0 0 1 0 -3 0]
[0 1  0  0 -1 0]   [0 0 0 1 -1 0]

The general solution is x1 = 2x5, x2 = x5, x3 = 3x5, x4 = x5, and x5 is free. Take x5 = 1. Then x1 = 2, x2 = x4 = 1, and x3 = 3. The balanced equation is

2H3O + CaCO3 → 3H2O + Ca + CO2
9. The following vectors list the numbers of atoms of boron (B), sulfur (S), hydrogen (H), and oxygen (O):

       [2]       [0]         [1]       [0]   boron
B2S3:  [3]  H2O: [0]  H3BO3: [0]  H2S: [1]   sulfur
       [0]       [2]         [3]       [2]   hydrogen
       [0]       [1]         [3]       [0]   oxygen

The coefficients in the equation x1·B2S3 + x2·H2O → x3·H3BO3 + x4·H2S satisfy

   [2]      [0]      [1]      [0]
x1 [3] + x2 [0] = x3 [0] + x4 [1]
   [0]      [2]      [3]      [2]
   [0]      [1]      [3]      [0]

Move the terms to the left side (changing the sign of each entry in the third and fourth vectors) and row reduce the augmented matrix of the homogeneous system:

[2 0 -1  0 0]   [1 0 0 -1/3 0]
[3 0  0 -1 0] ~ [0 1 0 -2   0]
[0 2 -3 -2 0]   [0 0 1 -2/3 0]
[0 1 -3  0 0]   [0 0 0  0   0]
The general solution is x1 = (1/3)x4, x2 = 2x4, x3 = (2/3)x4, with x4 free. Take x4 = 3. Then x1 = 1, x2 = 6, and x3 = 2. The balanced equation is

B2S3 + 6H2O → 2H3BO3 + 3H2S
10. [M] Set up vectors that list the atoms per molecule. Using the order lead (Pb), nitrogen (N), chromium (Cr), manganese (Mn), and oxygen (O), the vector equation to be solved is

   [1]      [0]      [3]      [0]      [0]      [0]   lead
   [6]      [0]      [0]      [0]      [0]      [1]   nitrogen
x1 [0] + x2 [1] = x3 [0] + x4 [2] + x5 [0] + x6 [0]   chromium
   [0]      [2]      [0]      [0]      [1]      [0]   manganese
   [0]      [8]      [4]      [3]      [2]      [1]   oxygen

The general solution is x1 = (1/6)x6, x2 = (22/45)x6, x3 = (1/18)x6, x4 = (11/45)x6, x5 = (44/45)x6, and x6 is free. Take x6 = 90. Then x1 = 15, x2 = 44, x3 = 5, x4 = 22, and x5 = 88. The balanced equation is

15PbN6 + 44CrMn2O8 → 5Pb3O4 + 22Cr2O3 + 88MnO2 + 90NO
11. [M] Set up vectors that list the atoms per molecule. Using the order manganese (Mn), sulfur (S), arsenic (As), chromium (Cr), oxygen (O), and hydrogen (H), the vector equation to be solved is

   [1]      [0]       [0]      [1]      [0]      [0]       [0]   manganese
   [1]      [0]       [1]      [0]      [0]      [3]       [0]   sulfur
x1 [0] + x2 [2]  + x3 [0] = x4 [0] + x5 [1] + x6 [0]  + x7 [0]   arsenic
   [0]      [10]      [0]      [0]      [0]      [1]       [0]   chromium
   [0]      [35]      [4]      [4]      [0]      [12]      [1]   oxygen
   [0]      [0]       [2]      [1]      [3]      [0]       [2]   hydrogen

In rational format, the general solution is x1 = (16/327)x7, x2 = (13/327)x7, x3 = (374/327)x7, x4 = (16/327)x7, x5 = (26/327)x7, x6 = (130/327)x7, and x7 is free. Take x7 = 327 to make the other variables whole numbers. The balanced equation is

16MnS + 13As2Cr10O35 + 374H2SO4 → 16HMnO4 + 26AsH3 + 130CrS3O12 + 327H2O
Note that some students may use decimal calculation and simply "round off" the fractions that relate
x1, ..., x6 to x7. The equations they construct may balance most of the elements but miss an atom or
two. Here is a solution submitted by two of my students:

5MnS + 4As2Cr10O35 + 115H2SO4 → 5HMnO4 + 8AsH3 + 40CrS3O12 + 100H2O
Everything balances except the hydrogen. The right side is short 1 hydrogen atom. Perhaps the
students thought that it escaped!
12. Write the equations for each intersection:
Intersection   Flow in = Flow out
A:             x1 + x4 = x2
B:             x2 = x3 + 100
C:             x3 + 80 = x4
Rearrange the equations:
x1 - x2 + x4 = 0
x2 - x3 = 100
x3 - x4 = -80
Reduce the augmented matrix:
[ 1  -1   0   1    0 ]   [ 1  0  0   0   20 ]
[ 0   1  -1   0  100 ] ~ [ 0  1  0  -1   20 ]
[ 0   0   1  -1  -80 ]   [ 0  0  1  -1  -80 ]
The general solution (written in the style of Section 1.2) is
x1 = 20
x2 = 20 + x4
x3 = -80 + x4
x4 is free
Since x3 cannot be negative, the minimum value of x4 is 80.
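A quick substitution check of this answer, x1 = 20, x2 = 20 + x4, x3 = -80 + x4 (an illustrative sketch, not from the manual):

```python
# General solution of the Exercise 12 network, parametrized by x4.
def flows(x4):
    return 20, 20 + x4, -80 + x4, x4

for t in (80, 100, 150):
    x1, x2, x3, x4 = flows(t)
    assert x1 + x4 == x2        # intersection A
    assert x2 == x3 + 100       # intersection B
    assert x3 + 80 == x4        # intersection C

# smallest x4 that keeps the flow x3 nonnegative
print(min(t for t in range(200) if flows(t)[2] >= 0))   # 80
```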
13. Write the equations for each intersection:
Intersection   Flow in = Flow out
A:             x2 + 30 = x1 + 80
B:             x3 + x5 = x2 + x4
C:             x6 + 100 = x5 + 40
D:             x4 + 40 = x6 + 90
E:             x1 + 60 = x3 + 20
Rearrange the equations:
x1 - x2 = -50
x2 - x3 + x4 - x5 = 0
x5 - x6 = 60
x4 - x6 = 50
x1 - x3 = -40
Reduce the augmented matrix:
[ 1  -1   0   0   0   0  -50 ]   [ 1  0  -1  0  0   0  -40 ]
[ 0   1  -1   1  -1   0    0 ]   [ 0  1  -1  0  0   0   10 ]
[ 0   0   0   0   1  -1   60 ] ~ [ 0  0   0  1  0  -1   50 ]
[ 0   0   0   1   0  -1   50 ]   [ 0  0   0  0  1  -1   60 ]
[ 1   0  -1   0   0   0  -40 ]   [ 0  0   0  0  0   0    0 ]
a. The general solution is
x1 = -40 + x3
x2 = 10 + x3
x3 is free
x4 = 50 + x6
x5 = 60 + x6
x6 is free
b. To find minimum flows, note that since x1 cannot be negative, x3 ≥ 40. This implies that
x2 ≥ 50. Also, since x6 cannot be negative, x4 ≥ 50 and x5 ≥ 60. The minimum flows are
x2 = 50, x3 = 40, x4 = 50, x5 = 60 (when x1 = 0 and x6 = 0).
14. Write the equations for each intersection:
Intersection   Flow in = Flow out
A:             80 = x1 + x5
B:             x1 + x2 + 100 = x4
C:             x3 = x2 + 90
D:             x4 + x5 = x3 + 90
Rearrange the equations:
x1 + x5 = 80
x1 + x2 - x4 = -100
x2 - x3 = -90
x3 - x4 - x5 = -90
Reduce the augmented matrix:
[ 1  0   0   0   1    80 ]   [ 1  0  0   0   1    80 ]
[ 1  1   0  -1   0  -100 ] ~ [ 0  1  0  -1  -1  -180 ]
[ 0  1  -1   0   0   -90 ]   [ 0  0  1  -1  -1   -90 ]
[ 0  0   1  -1  -1   -90 ]   [ 0  0  0   0   0     0 ]
a. The general solution is
x1 = 80 - x5
x2 = -180 + x4 + x5
x3 = -90 + x4 + x5
x4 is free
x5 is free
b. If x5 = 0, then the general solution is
x1 = 80
x2 = -180 + x4
x3 = -90 + x4
x4 is free
c. Since x2 cannot be negative, the minimum value of x4 when x5 = 0 is 180.
15. Write the equations for each intersection.
Intersection   Flow in = Flow out
A:             x6 + 60 = x1
B:             x1 = x2 + 70
C:             x2 + 100 = x3
D:             x3 = x4 + 90
E:             x4 + 80 = x5
F:             x5 = x6 + 80
Rearrange the equations:
x1 - x6 = 60
x1 - x2 = 70
x2 - x3 = -100
x3 - x4 = 90
x4 - x5 = -80
x5 - x6 = 80
Reduce the augmented matrix:
[ 1   0   0   0   0  -1    60 ]   [ 1  0  0  0  0  -1   60 ]
[ 1  -1   0   0   0   0    70 ]   [ 0  1  0  0  0  -1  -10 ]
[ 0   1  -1   0   0   0  -100 ] ~ [ 0  0  1  0  0  -1   90 ]
[ 0   0   1  -1   0   0    90 ]   [ 0  0  0  1  0  -1    0 ]
[ 0   0   0   1  -1   0   -80 ]   [ 0  0  0  0  1  -1   80 ]
[ 0   0   0   0   1  -1    80 ]   [ 0  0  0  0  0   0    0 ]
The general solution is
x1 = 60 + x6
x2 = -10 + x6
x3 = 90 + x6
x4 = x6
x5 = 80 + x6
x6 is free.
Since x2 cannot be negative, the minimum value of x6 is 10.
Note: The MATLAB box in the Study Guide discusses rational calculations, needed for
balancing the chemical equations in Exercises 10 and 11. As usual, the appendices cover this
material for Maple, Mathematica, and the TI calculators.
1.7 SOLUTIONS
Note:
Key exercises are 9–20 and 23–30. Exercise 30 states a result that could be a theorem in the text.
There is a danger, however, that students will memorize the result without understanding the proof, and
then later mix up the words row and column. Exercises 37 and 38 anticipate the discussion in Section 1.9
of one-to-one transformations. Exercise 44 is fairly difficult for my students.
1. Use an augmented matrix to study the solution set of x1u + x2v + x3w = 0 (*), where u, v, and w are
the three given vectors. Since

[ 5   7   9  0 ]   [ 5  7  9  0 ]
[ 0   2   4  0 ] ~ [ 0  2  4  0 ]
[ 0  -6  -8  0 ]   [ 0  0  4  0 ]

there are no free variables. So the homogeneous equation (*) has only the trivial solution. The vectors
are linearly independent.
2. Use an augmented matrix to study the solution set of x1u + x2v + x3w = 0 (*), where u, v, and w are
the three given vectors. Since

[  0  0   1  0 ]   [ -2  0    3  0 ]
[ -2  0   3  0 ] ~ [  0  8  7/2  0 ]
[  3  8  -1  0 ]   [  0  0    1  0 ]

there are no free variables. So the homogeneous equation (*) has only the trivial solution. The vectors
are linearly independent.
3. Use the method of Example 3 (or the box following the example). By comparing entries of the
vectors, one sees that the second vector is –2 times the first vector. Thus, the two vectors are linearly
dependent.
4. From the first entries in the vectors, it seems that the second vector of the pair [1; -3], [3; 9] may
be 3 times the first vector. But there is a sign problem with the second entries. So neither of the
vectors is a multiple of the other. The vectors are linearly independent.
5. Use the method of Example 2. Row reduce the augmented matrix for Ax = 0:
[  0  -3   9  0 ]   [  1  -4  -2  0 ]   [ 1  -4  -2  0 ]   [ 1  -4  -2  0 ]   [ 1  -4  -2  0 ]
[  2   1  -7  0 ] ~ [  2   1  -7  0 ] ~ [ 0   9  -3  0 ] ~ [ 0   9  -3  0 ] ~ [ 0   9  -3  0 ]
[ -1   4  -5  0 ]   [ -1   4  -5  0 ]   [ 0   0  -7  0 ]   [ 0   0  -7  0 ]   [ 0   0  -7  0 ]
[  1  -4  -2  0 ]   [  0  -3   9  0 ]   [ 0  -3   9  0 ]   [ 0   0   8  0 ]   [ 0   0   0  0 ]
There are no free variables. The equation Ax = 0 has only the trivial solution and so the columns of A
are linearly independent.
6. Use the method of Example 2. Row reduce the augmented matrix for Ax = 0:
[ -4  -3    0  0 ]   [  1   1   -5  0 ]   [ 1   1   -5  0 ]   [ 1  1   -5  0 ]   [ 1  1   -5  0 ]
[  0   1   -5  0 ] ~ [  0   1   -5  0 ] ~ [ 0   1   -5  0 ] ~ [ 0  1   -5  0 ] ~ [ 0  1   -5  0 ]
[  1   1   -5  0 ]   [ -4  -3    0  0 ]   [ 0   1  -20  0 ]   [ 0  0  -15  0 ]   [ 0  0  -15  0 ]
[  2   1  -10  0 ]   [  2   1  -10  0 ]   [ 0  -1    0  0 ]   [ 0  0   -5  0 ]   [ 0  0    0  0 ]
There are no free variables. The equation Ax = 0 has only the trivial solution and so the columns of A
are linearly independent.
7. Study the equation Ax = 0. Some people may start with the method of Example 2:
[  1   4  -3  0  0 ]   [ 1   4  -3  0  0 ]   [ 1  4  -3   0  0 ]
[ -2  -7   5  1  0 ] ~ [ 0   1  -1  1  0 ] ~ [ 0  1  -1   1  0 ]
[ -4  -5   7  5  0 ]   [ 0  11  -5  5  0 ]   [ 0  0   6  -6  0 ]
But this is a waste of time. There are only 3 rows, so there are at most three pivot positions. Hence, at
least one of the four variables must be free. So the equation Ax = 0 has a nontrivial solution and the
columns of A are linearly dependent.
8. Same situation as with Exercise 7. The (unnecessary) row operations are
[  1  -2   3  2  0 ]   [ 1  -2   3  2  0 ]   [ 1  -2   3  2  0 ]
[ -2   4  -6  2  0 ] ~ [ 0   0   0  6  0 ] ~ [ 0   1  -1  3  0 ]
[  0   1  -1  3  0 ]   [ 0   1  -1  3  0 ]   [ 0   0   0  6  0 ]
Again, because there are at most three pivot positions yet there are four variables, the equation
Ax = 0 has a nontrivial solution and the columns of A are linearly dependent.
9. a. The vector v3 is in Span{v1, v2} if and only if the equation x1v1 + x2v2 = v3 has a solution. To find
out, row reduce [v1 v2 v3], considered as an augmented matrix:

[  1  -3   5 ]   [ 1  -3       5 ]
[ -3   9  -7 ] ~ [ 0   0       8 ]
[  2  -6   h ]   [ 0   0  h - 10 ]
At this point, the equation 0 = 8 shows that the original vector equation has no solution. So v3 is
in Span{v1, v2} for no value of h.
b. For {v1, v2, v3} to be linearly independent, the equation x1v1 + x2v2 + x3v3 = 0 must have only the
trivial solution. Row reduce the augmented matrix [v1 v2 v3 0]:

[  1  -3   5  0 ]   [ 1  -3       5  0 ]   [ 1  -3  5  0 ]
[ -3   9  -7  0 ] ~ [ 0   0       8  0 ] ~ [ 0   0  8  0 ]
[  2  -6   h  0 ]   [ 0   0  h - 10  0 ]   [ 0   0  0  0 ]
For every value of h, x2 is a free variable, and so the homogeneous equation has a nontrivial
solution. Thus {v1, v2, v3} is a linearly dependent set for all h.
10. a. The vector v3 is in Span{v1, v2} if and only if the equation x1v1 + x2v2 = v3 has a solution. To find
out, row reduce [v1 v2 v3], considered as an augmented matrix:

[  1  -3   2 ]   [ 1  -3       2 ]
[ -3   9  -5 ] ~ [ 0   0       1 ]
[ -5  15   h ]   [ 0   0  h + 10 ]
At this point, the equation 0 = 1 shows that the original vector equation has no solution. So v3 is
in Span{v1, v2} for no value of h.
b. For {v1, v2, v3} to be linearly independent, the equation x1v1 + x2v2 + x3v3 = 0 must have only the
trivial solution. Row reduce the augmented matrix [v1 v2 v3 0]:

[  1  -3   2  0 ]   [ 1  -3       2  0 ]   [ 1  -3  2  0 ]
[ -3   9  -5  0 ] ~ [ 0   0       1  0 ] ~ [ 0   0  1  0 ]
[ -5  15   h  0 ]   [ 0   0  h + 10  0 ]   [ 0   0  0  0 ]
For every value of h, x2 is a free variable, and so the homogeneous equation has a nontrivial
solution. Thus {v1, v2, v3} is a linearly dependent set for all h.
11. To study the linear dependence of three vectors, say v1, v2, v3, row reduce the augmented matrix
[v1 v2 v3 0]:

[  2   4  -2  0 ]   [ 2   4      -2  0 ]   [ 2   4      -2  0 ]
[ -2  -6   2  0 ] ~ [ 0  -2       0  0 ] ~ [ 0  -2       0  0 ]
[  4   7   h  0 ]   [ 0  -1   h + 4  0 ]   [ 0   0   h + 4  0 ]
The equation x1v1 + x2v2 + x3v3 = 0 has a nontrivial solution if and only if h + 4 = 0 (which
corresponds to x3 being a free variable). Thus, the vectors are linearly dependent if and only if h = -4.
12. To study the linear dependence of three vectors, say v1, v2, v3, row reduce the augmented matrix
[v1 v2 v3 0]:

[  3  -6  9  0 ]   [ 3  -6       9  0 ]
[ -6   4  h  0 ] ~ [ 0  -1       0  0 ]
[  1  -3  3  0 ]   [ 0   0  h + 18  0 ]
The equation x1v1 + x2v2 + x3v3 = 0 has a nontrivial solution if and only if h + 18 = 0 (which
corresponds to x3 being a free variable). Thus, the vectors are linearly dependent if and only if
h = -18.
13. To study the linear dependence of three vectors, say v1, v2, v3, row reduce the augmented matrix
[v1 v2 v3 0]:

[  1  -2   3  0 ]   [ 1  -2       3  0 ]
[  5  -9   h  0 ] ~ [ 0   1  h - 15  0 ]
[ -3   6  -9  0 ]   [ 0   0       0  0 ]
The equation x1v1 + x2v2 + x3v3 = 0 has a free variable and hence a nontrivial solution no matter what
the value of h. So the vectors are linearly dependent for all values of h.
14. To study the linear dependence of three vectors, say v1, v2, v3, row reduce the augmented matrix
[v1 v2 v3 0]:

[  1  -3  -2  0 ]   [ 1  -3      -2  0 ]   [ 1  -3      -2  0 ]
[ -2   7  -1  0 ] ~ [ 0   1      -5  0 ] ~ [ 0   1      -5  0 ]
[  4  -6   h  0 ]   [ 0   6   h + 8  0 ]   [ 0   0  h + 38  0 ]
The equation x1v1 + x2v2 + x3v3 = 0 has a nontrivial solution if and only if h + 38 = 0 (which
corresponds to x3 being a free variable). Thus, the vectors are linearly dependent if and only
if h = -38.
15. The set is linearly dependent, by Theorem 8, because there are four vectors in the set but only two
entries in each vector.
16. The set is linearly dependent because the second vector is –3/2 times the first vector.
17. The set is linearly dependent, by Theorem 9, because the list of vectors contains a zero vector.
18. The set is linearly dependent, by Theorem 8, because there are four vectors in the set but only two
entries in each vector.
19. The set is linearly independent because neither vector is a multiple of the other vector. [Two of the
entries in the first vector are -4 times the corresponding entry in the second vector. But this multiple
does not work for the third entries.]
20. The set is linearly dependent, by Theorem 9, because the list of vectors contains a zero vector.
21. a. False. A homogeneous system always has the trivial solution. See the box before Example 2.
b. False. See the warning after Theorem 7.
c. True. See Fig. 3, after Theorem 8.
d. True. See the remark following Example 4.
22. a. True. See Theorem 7.
b. True. See Example 4.
c. False. For instance, the set consisting of the vectors (1, 2, 3) and (2, 4, 6) is linearly dependent. See the warning
after Theorem 8.
d. False. See Example 3(a).
23. Possible echelon forms (■ a pivot entry, * any value):
[ ■  * ]   [ 0  ■ ]   [ 0  0 ]
[ 0  0 ],  [ 0  0 ],  [ 0  0 ]
24.
[ ■  *  * ]
[ 0  ■  * ]
[ 0  0  ■ ]
25.
[ ■  * ]       [ 0  ■ ]
[ 0  ■ ]  and  [ 0  0 ]
[ 0  0 ]       [ 0  0 ]
[ 0  0 ]       [ 0  0 ]
26.
[ ■  *  * ]
[ 0  ■  * ]
[ 0  0  ■ ]
[ 0  0  0 ]
The columns must be linearly independent, by Theorem 7, because the first column is
not zero, the second column is not a multiple of the first, and the third column is not a linear
combination of the preceding two columns (because a3 is not in Span{a1, a2}).
27. All four columns of the 6×4 matrix A must be pivot columns. Otherwise, the equation Ax = 0 would
have a free variable, in which case the columns of A would be linearly dependent.
28. If the columns of a 4×6 matrix A span R^4, then A has a pivot in each row, by Theorem 4. Since each
pivot position is in a different column, A has four pivot columns.
29. A: any 3×2 matrix with one column a multiple of the other.
B: any 3×2 matrix with two nonzero columns such that neither column is a multiple of the other. In
this case the columns are linearly independent and so the equation Bx = 0 has only the trivial
solution.
30. a. n
b. The columns of A are linearly independent if and only if the equation Ax = 0 has only the trivial
solution. This happens if and only if Ax = 0 has no free variables, which in turn happens if and
only if every variable is a basic variable, that is, if and only if every column of A is a pivot
column.
31. Think of A = [a1 a2 a3]. The text points out that a3 = a1 + a2. Rewrite this as a1 + a2 - a3 = 0. As a
matrix equation, Ax = 0 for x = (1, 1, -1).
32. Think of A = [a1 a2 a3]. The text points out that a1 - 3a2 = a3. Rewrite this as a1 - 3a2 - a3 = 0. As
a matrix equation, Ax = 0 for x = (1, -3, -1).
33. True, by Theorem 7. (The Study Guide adds another justification.)
34. False. The vector v1 could be the zero vector.
35. True, by Theorem 9.
36. False. Counterexample: Take v1 and v2 to be multiples of one vector. Take v3 to be not a multiple of
that vector. For example, v1 = (1, 1, 1), v2 = (2, 2, 2), v3 = (1, 0, 0).
37. True. A linear dependence relation among v1, v2, v3 may be extended to a linear dependence relation
among v1, v2, v3, v4 by placing a zero weight on v4.
38. True. If the equation x1v1 + x2v2 + x3v3 = 0 had a nontrivial solution (with at least one of x1, x2, x3
nonzero), then so would the equation x1v1 + x2v2 + x3v3 + 0·v4 = 0. But that cannot happen because
{v1, v2, v3, v4} is linearly independent. So {v1, v2, v3} must be linearly independent. This problem can
also be solved using Exercise 37, if you know that the statement there is true.
39. If for all b the equation Ax = b has at most one solution, then take b = 0, and conclude that the
equation Ax = 0 has at most one solution. Then the trivial solution is the only solution, and so the
columns of A are linearly independent.
40. An m×n matrix with n pivot columns has a pivot in each column. So the equation Ax = b has no free
variables. If there is a solution, it must be unique.
41. [M] Row reduction of A, carried out in rational arithmetic, produces an echelon form with pivots in
columns 1, 2, and 4 and a final row of zeros. Use the pivot columns of A (columns 1, 2, and 4) to
form B. Other choices are possible.
42. [M] Row reduction of A produces an echelon form with pivots in columns 1, 2, 3, and 5 and a final
row of zeros. Use the pivot columns of A (columns 1, 2, 3, and 5) to form B. Other choices are
possible.
43. [M] Make v any one of the columns of A that is not in B and row reduce the augmented matrix
[B v]. The calculations will show that the equation Bx = v is consistent, which means that v is a
linear combination of the columns of B. Thus, each column of A that is not a column of B is in the set
spanned by the columns of B.
44. [M] Calculations made as for Exercise 43 will show that each column of A that is not a column of B
is in the set spanned by the columns of B. Reason: The original matrix A has only four pivot
columns. If one or more columns of A are removed, the resulting matrix will have at most four pivot
columns. (Use exactly the same row operations on the new matrix that were used to reduce A to
echelon form.) If v is a column of A that is not in B, then row reduction of the augmented matrix
[B v] will display at most four pivot columns. Since B itself was constructed to have four pivot
columns, adjoining v cannot produce a fifth pivot column. Thus the first four columns of [B v] are
the pivot columns. This implies that the equation Bx = v has a solution.
Note:
At the end of Section 1.7, the Study Guide has another note to students about “Mastering Linear
Algebra Concepts.” The note describes how to organize a review sheet that will help students form a
mental image of linear independence. The note also lists typical misuses of terminology, in which an
adjective is applied to an inappropriate noun. (This is a major problem for my students.) I require my
students to prepare a review sheet as described in the Study Guide, and I try to make helpful comments on
their sheets. I am convinced, through personal observation and student surveys, that the students who
prepare many of these review sheets consistently perform better than other students. Hopefully, these
students will remember important concepts for some time beyond the final exam.
1.8 SOLUTIONS
Notes:
The key exercises are 17–20, 25 and 31. Exercise 20 is worth assigning even if you normally
assign only odd exercises. Exercise 25 (and 26) can be used to make a few comments about computer
graphics, even if you do not plan to cover Section 2.6. For Exercise 31, the Study Guide encourages
students not to look at the proof before trying hard to construct it. Then the Guide explains how to create
the proof.
Exercises 19 and 20 provide a natural segue into Section 1.9. I arrange to discuss the homework on
these exercises when I am ready to begin Section 1.9. The definition of the standard matrix in Section 1.9
follows naturally from the homework, and so I’ve covered the first page of Section 1.9 before students
realize we are working on new material.
The text does not provide much practice determining whether a transformation is linear, because the
time needed to develop this skill would have to be taken away from some other topic. If you want your
students to be able to do this, you may need to supplement Exercises 23, 24, 32 and 33.
If you skip the concepts of one-to-one and “onto” in Section 1.9, you can use the result of Exercise 31
to show that the coordinate mapping from a vector space onto R
n
(in Section 4.4) preserves linear
independence and dependence of sets of vectors. (See Example 6 in Section 4.4.)
1. T(u) = Au:
[ 2  0 ] [  1 ]   [  2 ]
[ 0  2 ] [ -3 ] = [ -6 ]
T(v):
[ 2  0 ] [ a ]   [ 2a ]
[ 0  2 ] [ b ] = [ 2b ]
2. T(u) = Au:
[ 1/3    0    0  ] [  3 ]   [  1 ]
[  0   1/3    0  ] [  6 ] = [  2 ]
[  0     0   1/3 ] [ -9 ]   [ -3 ]
T(v):
[ 1/3    0    0  ] [ a ]   [ a/3 ]
[  0   1/3    0  ] [ b ] = [ b/3 ]
[  0     0   1/3 ] [ c ]   [ c/3 ]
3. [A b] =
[  1   0  -3   2 ]   [ 1   0  -3   2 ]   [ 1  0  -3   2 ]   [ 1  0  0  -7 ]
[ -3   1   6  -3 ] ~ [ 0   1  -3   3 ] ~ [ 0  1  -3   3 ] ~ [ 0  1  0  -6 ]
[  2  -2  -1   1 ]   [ 0  -2   5  -3 ]   [ 0  0   1  -3 ]   [ 0  0  1  -3 ]
so x = (-7, -6, -3), a unique solution.
4. [A b] =
[  1  -2   3  -6 ]   [ 1  -2  3  -6 ]   [ 1  -2  3  -6 ]   [ 1  0  0  -17 ]
[  0  -1   3   4 ] ~ [ 0  -1  3   4 ] ~ [ 0  -1  3   4 ] ~ [ 0  1  0   -7 ]
[ -2   5  -6   5 ]   [ 0   1  0  -7 ]   [ 0   0  3  -3 ]   [ 0  0  1   -1 ]
so x = (-17, -7, -1), a unique solution.
5. [A b] =
[  1  -5  -7  -2 ]   [ 1  -5  -7  -2 ]   [ 1  0  3  3 ]
[ -3   7   5  -2 ] ~ [ 0   1   2   1 ] ~ [ 0  1  2  1 ]
Note that a solution is not (3, 1). To avoid this common error, write the equations:
x1 + 3x3 = 3
x2 + 2x3 = 1
and solve for the basic variables:
x1 = 3 - 3x3
x2 = 1 - 2x3
x3 is free
The general solution is x = (3 - 3x3, 1 - 2x3, x3) = (3, 1, 0) + x3(-3, -2, 1). For a particular solution,
one might choose x3 = 0 and x = (3, 1, 0).
6. [A b] =
[ 1  -3  2   1 ]   [ 1  -3  2  1 ]   [ 1  -3  2  1 ]   [ 1  0  8  10 ]
[ 3  -8  8   6 ] ~ [ 0   1  2  3 ] ~ [ 0   1  2  3 ] ~ [ 0  1  2   3 ]
[ 0   1  2   3 ]   [ 0   1  2  3 ]   [ 0   0  0  0 ]   [ 0  0  0   0 ]
[ 1   0  8  10 ]   [ 0   3  6  9 ]   [ 0   0  0  0 ]   [ 0  0  0   0 ]
Write the equations:
x1 + 8x3 = 10
x2 + 2x3 = 3
and solve for the basic variables:
x1 = 10 - 8x3
x2 = 3 - 2x3
x3 is free
The general solution is x = (10 - 8x3, 3 - 2x3, x3) = (10, 3, 0) + x3(-8, -2, 1). For a particular
solution, one might choose x3 = 0 and x = (10, 3, 0).
7. The value of a is 5. The domain of T is R^5, because a 6×5 matrix has 5 columns and for Ax to be
defined, x must be in R^5. The value of b is 6. The codomain of T is R^6, because Ax is a linear
combination of the columns of A, and each column of A is in R^6.
8. The matrix A must have 7 rows and 5 columns. For the domain of T to be R^5, A must have five
columns so that Ax is defined for x in R^5. For the codomain of T to be R^7, the columns of A must
have seven entries (in which case A must have seven rows), because Ax is a linear combination of the
columns of A.
9. Solve Ax = 0:
[ 1  -3   5  -5  0 ]   [ 1  -3   5  -5  0 ]   [ 1  -3   5  -5  0 ]
[ 0   1  -3   5  0 ] ~ [ 0   1  -3   5  0 ] ~ [ 0   1  -3   5  0 ]
[ 2  -4   4  -4  0 ]   [ 0   2  -6   6  0 ]   [ 0   0   0  -4  0 ]
~
[ 1  0  -4  0  0 ]
[ 0  1  -3  0  0 ]
[ 0  0   0  1  0 ]
The equations are x1 - 4x3 = 0, x2 - 3x3 = 0, and x4 = 0, so x1 = 4x3, x2 = 3x3, x3 is free, and
x4 = 0. Thus x = x3(4, 3, 1, 0).
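A quick numerical check (a sketch, assuming the rows of A as shown in the row reduction above):

```python
# Every multiple of (4, 3, 1, 0) should be sent to zero by A.
A = [[1, -3, 5, -5],
     [0, 1, -3, 5],
     [2, -4, 4, -4]]
for c in (1, 2, -7):
    x = [4*c, 3*c, 1*c, 0]
    Ax = [sum(a*b for a, b in zip(row, x)) for row in A]
    assert Ax == [0, 0, 0]
```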
10. Solve Ax = 0.
[ 3  2  10  -6  0 ]   [ 1  0   2  -4  0 ]   [ 1  0  2  -4  0 ]
[ 1  0   2  -4  0 ] ~ [ 3  2  10  -6  0 ] ~ [ 0  2  4   6  0 ]
[ 0  1   2   3  0 ]   [ 0  1   2   3  0 ]   [ 0  1  2   3  0 ]
[ 1  4  10   8  0 ]   [ 1  4  10   8  0 ]   [ 0  4  8  12  0 ]
~
[ 1  0  2  -4  0 ]   [ 1  0  2  -4  0 ]
[ 0  1  2   3  0 ]   [ 0  1  2   3  0 ]
[ 0  2  4   6  0 ] ~ [ 0  0  0   0  0 ]
[ 0  4  8  12  0 ]   [ 0  0  0   0  0 ]
The equations are
x1 + 2x3 - 4x4 = 0
x2 + 2x3 + 3x4 = 0
so x1 = -2x3 + 4x4 and x2 = -2x3 - 3x4, with x3 and x4 free. Thus
x = x3(-2, -2, 1, 0) + x4(4, -3, 0, 1).
11. Is the system represented by [A b] consistent? Yes, as the following calculation shows.
[ 1  -3   5  -5  -1 ]   [ 1  -3   5  -5  -1 ]   [ 1  -3   5  -5  -1 ]
[ 0   1  -3   5   1 ] ~ [ 0   1  -3   5   1 ] ~ [ 0   1  -3   5   1 ]
[ 2  -4   4  -4   0 ]   [ 0   2  -6   6   2 ]   [ 0   0   0  -4   0 ]
The system is consistent, so b is in the range of the transformation x ↦ Ax.
12. Is the system represented by [A b] consistent?
[ 3  2  10  -6  -1 ]   [ 1  0   2  -4   3 ]   [ 1  0  2  -4    3 ]
[ 1  0   2  -4   3 ] ~ [ 3  2  10  -6  -1 ] ~ [ 0  2  4   6  -10 ]
[ 0  1   2   3  -1 ]   [ 0  1   2   3  -1 ]   [ 0  1  2   3   -1 ]
[ 1  4  10   8   4 ]   [ 1  4  10   8   4 ]   [ 0  4  8  12    1 ]
~
[ 1  0  2  -4    3 ]   [ 1  0  2  -4   3 ]
[ 0  1  2   3   -1 ]   [ 0  1  2   3  -1 ]
[ 0  2  4   6  -10 ] ~ [ 0  0  0   0  -8 ]
[ 0  4  8  12    1 ]   [ 0  0  0   0   5 ]
The system is inconsistent, so b is not in the range of the transformation x ↦ Ax.
13. A reflection through the origin. The transformation in Exercise 13 may also be described as a
rotation of π radians about the origin or a rotation of -π radians about the origin.
14. A scaling by the factor 2.
15. A reflection through the line x2 = x1.
16. A scaling by a factor of 2 and a projection onto the x2-axis.
17. T(2u) = 2T(u) = 2(4, 1) = (8, 2), T(3v) = 3T(v) = 3(-1, 3) = (-3, 9), and
T(2u + 3v) = 2T(u) + 3T(v) = (8, 2) + (-3, 9) = (5, 11).
18. Draw a line through w parallel to v, and draw a line through w parallel to u. See the left part of the
figure below. From this, estimate that w = u + 2v. Since T is linear, T(w) = T(u) + 2T(v). Locate T(u)
and 2T(v) as in the right part of the figure and form the associated parallelogram to locate T(w).
(Figure: left, w = u + 2v located in the x1x2-plane from lines parallel to u and v; right, T(w) located
from the parallelogram on T(u) and 2T(v).)
19. All we know are the images of e1 and e2 and the fact that T is linear. The key idea is to write
x = (5, -3) = 5(1, 0) - 3(0, 1) = 5e1 - 3e2.
Then, from the linearity of T, write
T(x) = T(5e1 - 3e2) = 5T(e1) - 3T(e2) = 5y1 - 3y2 = 5(2, 5) - 3(-1, 6) = (13, 7).
To find the image of (x1, x2), observe that
x = (x1, x2) = x1(1, 0) + x2(0, 1) = x1e1 + x2e2. Then
T(x) = T(x1e1 + x2e2) = x1T(e1) + x2T(e2) = x1(2, 5) + x2(-1, 6) = (2x1 - x2, 5x1 + 6x2)
20. Use the basic definition of Ax to construct A. Write
T(x) = x1v1 + x2v2 = [v1 v2]x = Ax,  with A = [v1 v2] = [  3  -7 ]
                                                        [ -5   2 ]
21. a. True. Functions from R
n
to R
m
are defined before Fig. 2. A linear transformation is a function
with certain properties.
b. False. The domain is R
5
. See the paragraph before Example 1.
c. False. The range is the set of all linear combinations of the columns of A. See the paragraph
before Example 1.
d. False. See the paragraph after the definition of a linear transformation.
e. True. See the paragraph following the box that contains equation (4).
22. a. True. See the subsection on Matrix Transformations.
b. True. See the subsection on Linear Transformations.
c. False. The question is an existence question. See the remark about Example 1(d), following the
solution of Example 1.
d. True. See the discussion following the definition of a linear transformation.
e. True. T(0) = 0. See the box after the definition of a linear transformation.
23. a. When b = 0, f (x) = mx. In this case, for all x,y in R and all scalars c and d,
f (cx + dy) = m(cx + dy) = mcx + mdy = c(mx) + d(my) = cf (x) + d f (y)
This shows that f is linear.
b. When f (x) = mx + b, with b nonzero, f(0) = m·0 + b = b ≠ 0. This shows that f is not linear,
because every linear transformation maps the zero vector in its domain into the zero vector in the
codomain. (In this case, both zero vectors are just the number 0.) Another argument, for instance,
would be to calculate f (2x) = m(2x) + b and 2f (x) = 2mx + 2b. If b is nonzero, then f (2x) is not
equal to 2f (x) and so f is not a linear transformation.
c. In calculus, f is called a “linear function” because the graph of f is a line.
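The contrast between parts (a) and (b) can be spot-checked numerically; the values of m and b below are arbitrary choices (a sketch, not part of the manual):

```python
m, b = 3.0, 5.0
f_linear = lambda x: m * x          # part (a): linear
f_affine = lambda x: m * x + b      # part (b): not linear when b != 0

# f(cx + dy) = c f(x) + d f(y) holds for the linear map
for x, y, c, d in [(1.0, 2.0, 4.0, -1.5), (0.5, -3.0, 2.0, 0.25)]:
    assert f_linear(c*x + d*y) == c*f_linear(x) + d*f_linear(y)

# but f(2x) != 2 f(x) once b is nonzero
assert f_affine(2.0) != 2 * f_affine(1.0)    # 11.0 vs 16.0
```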
24. Let T(x) = Ax + b for x in R^n. If b is not zero, T(0) = A0 + b = b ≠ 0. Actually, T fails both
properties of a linear transformation. For instance, T(2x) = A(2x) + b = 2Ax + b, which is not the
same as 2T(x) = 2(Ax + b) = 2Ax + 2b. Also,
T(x + y) = A(x + y) + b = Ax + Ay + b
which is not the same as
T(x) + T(y) = Ax + b + Ay + b
25. Any point x on the line through p in the direction of v satisfies the parametric equation
x = p + tv for some value of t. By linearity, the image T(x) satisfies the parametric equation
T(x) = T(p + tv) = T(p) + tT(v)
(*)
If T(v) = 0, then T(x) = T(p) for all values of t, and the image of the original line is just a single
point. Otherwise, (*) is the parametric equation of a line through T(p) in the direction of T(v).
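Equation (*) can be illustrated with a concrete linear map; the 2x2 matrix below is a hypothetical example, not taken from the exercise:

```python
A = [[1, 2],
     [3, 4]]

def T(x):   # T(x) = Ax
    return [A[0][0]*x[0] + A[0][1]*x[1],
            A[1][0]*x[0] + A[1][1]*x[1]]

p, v = [1, 1], [2, -1]
Tp, Tv = T(p), T(v)
# the image of p + t v is T(p) + t T(v) for every t
for t in range(-3, 4):
    x = [p[0] + t*v[0], p[1] + t*v[1]]
    assert T(x) == [Tp[0] + t*Tv[0], Tp[1] + t*Tv[1]]
```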
26. a. From the figure following Exercise 22 in Section 1.5, the line through p and q is in the direction
of q - p, and so the equation of the line is x = p + t(q - p) = p + tq - tp = (1 - t)p + tq.
b. Consider x = (1 - t)p + tq for t such that 0 ≤ t ≤ 1. Then, by linearity of T,
T(x) = T((1 - t)p + tq) = (1 - t)T(p) + tT(q),  0 ≤ t ≤ 1   (*)
If T(p) and T(q) are distinct, then (*) is the equation for the line segment between T(p) and T(q),
as shown in part (a). Otherwise, the set of images is just the single point T(p), because
(1 - t)T(p) + tT(q) = (1 - t)T(p) + tT(p) = T(p)
27. Any point x on the plane P satisfies the parametric equation x = su + tv for some values of s and t.
By linearity, the image T(x) satisfies the parametric equation
T(x) = sT(u) + tT(v) (s, t in R)
The set of images is just Span{T(u), T(v)}. If T(u) and T(v) are linearly independent, Span{T(u),
T(v)} is a plane through T(u), T(v), and 0. If T(u) and T(v) are linearly dependent and not both zero,
then Span{T(u), T(v)} is a line through 0. If T(u) = T(v) = 0, then Span{T(u), T(v)} is {0}.
28. Consider a point x in the parallelogram determined by u and v, say x = au + bv for 0 ≤ a ≤ 1,
0 ≤ b ≤ 1. By linearity of T, the image of x is
T(x) = T(au + bv) = aT(u) + bT(v), for 0 ≤ a ≤ 1, 0 ≤ b ≤ 1
This image point lies in the parallelogram determined by T(u) and T(v). Special “degenerate” cases
arise when T(u) and T(v) are linearly dependent. If one of the images is not zero, then the
“parallelogram” is actually the line segment from 0 to T(u) + T(v). If both T(u) and T(v) are zero,
then the parallelogram is just {0}. Another possibility is that even u and v are linearly dependent, in
which case the original parallelogram is degenerate (either a line segment or the zero vector). In this
case, the set of images must be degenerate, too.
29.
30. Given any x in R^n, there are constants c1, …, cp such that x = c1v1 + ⋯ + cpvp, because v1, …, vp
span R^n. Then, from property (5) of a linear transformation,
T(x) = c1T(v1) + ⋯ + cpT(vp) = c1·0 + ⋯ + cp·0 = 0
31. (The Study Guide has a more detailed discussion of the proof.) Suppose that {v1, v2, v3} is linearly
dependent. Then there exist scalars c1, c2, c3, not all zero, such that
c1v1 + c2v2 + c3v3 = 0
Then T(c1v1 + c2v2 + c3v3) = T(0) = 0. Since T is linear,
c1T(v1) + c2T(v2) + c3T(v3) = 0
Since not all the weights are zero, {T(v1), T(v2), T(v3)} is a linearly dependent set.
32. Take any vector (x1, x2) with x2 ≠ 0, and use a negative scalar. For instance, T(0, 1) = (-2, -4), but
T(-1·(0, 1)) = T(0, -1) = (-2, 4) ≠ (-1)·T(0, 1).
33. One possibility is to show that T does not map the zero vector into the zero vector, something that
every linear transformation does do. T(0, 0) = (0, -3, 0).
34. Take u and v in R^3 and let c and d be scalars. Then
cu + dv = (cu1 + dv1, cu2 + dv2, cu3 + dv3). The transformation T is linear because
T(cu + dv) = (cu1 + dv1, cu2 + dv2, -(cu3 + dv3)) = (cu1 + dv1, cu2 + dv2, -cu3 - dv3)
= (cu1, cu2, -cu3) + (dv1, dv2, -dv3) = c(u1, u2, -u3) + d(v1, v2, -v3)
= cT(u) + dT(v)
35. Take u and v in R^3 and let c and d be scalars. Then
cu + dv = (cu1 + dv1, cu2 + dv2, cu3 + dv3). The transformation T is linear because
T(cu + dv) = (cu1 + dv1, 0, cu3 + dv3) = (cu1, 0, cu3) + (dv1, 0, dv3)
= c(u1, 0, u3) + d(v1, 0, v3)
= cT(u) + dT(v)
36. Suppose that {u, v} is a linearly independent set in R^n and yet T(u) and T(v) are linearly dependent.
Then there exist weights c1, c2, not both zero, such that c1T(u) + c2T(v) = 0. Because T is linear,
T(c1u + c2v) = 0. That is, the vector x = c1u + c2v satisfies T(x) = 0. Furthermore, x cannot be the
zero vector, since that would mean that a nontrivial linear combination of u and v is zero, which is
impossible because u and v are linearly independent. Thus, the equation T(x) = 0 has a nontrivial
solution.
37. [M] Row reduction of the augmented matrix [A 0] gives
[ 1  0   1  0  0 ]
[ 0  1  -1  0  0 ]
[ 0  0   0  1  0 ]
[ 0  0   0  0  0 ]
so x1 = -x3, x2 = x3, x3 is free, and x4 = 0. Thus x = x3(-1, 1, 1, 0).
38. [M] The augmented matrix [A 0] row reduces to

[1 0 0 −1 0; 0 1 0 −1 0; 0 0 1 −1 0; 0 0 0 0 0]

so x1 = x4, x2 = x4, x3 = x4, x4 is free, and x = x4·(1, 1, 1, 1).
39. [M] The augmented matrix [A b] row reduces as

[2 3 5 −5 8; −7 7 0 0 7; −3 4 1 3 5; −9 3 −6 −4 −3] ~ [1 0 1 0 1; 0 1 1 0 2; 0 0 0 1 0; 0 0 0 0 0]

Yes, b is in the range of the transformation, because the augmented matrix shows a consistent system. In fact, the general solution is x1 = 1 – x3, x2 = 2 – x3, x3 is free, x4 = 0; when x3 = 0 a solution is x = (1, 2, 0, 0).
40. [M] The augmented matrix [A b] row reduces to

[1 0 0 −1 1; 0 1 0 −1 2; 0 0 1 −1 1; 0 0 0 0 0]

Yes, b is in the range of the transformation, because the augmented matrix shows a consistent system. In fact, the general solution is x1 = 1 + x4, x2 = 2 + x4, x3 = 1 + x4, x4 is free; when x4 = 0 a solution is x = (1, 2, 1, 0).
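The [M] exercises above expect a matrix program such as MATLAB, but the same range-membership check can be sketched with NumPy. The entries below are the ones used in Exercise 39 here; treat them as an assumption if your printing of the text differs.

```python
import numpy as np

# Matrix and right-hand side as used in Exercise 39 (an assumption if your
# printing of the exercise differs).
A = np.array([[ 2.,  3.,  5., -5.],
              [-7.,  7.,  0.,  0.],
              [-3.,  4.,  1.,  3.],
              [-9.,  3., -6., -4.]])
b = np.array([8., 7., 5., -3.])

# Particular solution obtained by setting the free variable x3 = 0.
x = np.array([1., 2., 0., 0.])

# b is in the range of the transformation exactly when Ax = b is consistent.
print(np.allclose(A @ x, b))  # True
```

A full row reduction is not needed to verify a claimed solution: one matrix-vector product suffices.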
Notes:
At the end of Section 1.8, the Study Guide provides a list of equations, figures, examples, and
connections with concepts that will strengthen a student’s understanding of linear transformations. I
encourage my students to continue the construction of review sheets similar to those for “span” and
“linear independence,” but I refrain from collecting these sheets. At some point the students have to
assume the responsibility for mastering this material.
If your students are using MATLAB or another matrix program, you might insert the definition of
matrix multiplication after this section, and then assign a project that uses random matrices to explore
properties of matrix multiplication. See Exercises 34–36 in Section 2.1. Meanwhile, in class you can
continue with your plans for finishing Chapter 1. When you get to Section 2.1, you won’t have much to
do. The Study Guide’s MATLAB note for Section 2.1 contains the matrix notation students will need for
a project on matrix multiplication. The appendices in the Study Guide have the corresponding material for
Mathematica, Maple, and the TI-83+/84+/89 calculators.
1.9 SOLUTIONS

Notes: This section is optional if you plan to treat linear transformations only lightly, but many instructors will want to cover at least Theorem 10 and a few geometric examples. Exercises 15 and 16 illustrate a fast way to solve Exercises 17–22 without explicitly computing the images of the standard basis.

The purpose of introducing one-to-one and onto is to prepare for the term isomorphism (in Section 4.4) and to acquaint math majors with these terms. Mastery of these concepts would require a substantial digression, and some instructors prefer to omit these topics (and Exercises 25–40). In this case, you can use the result of Exercise 31 in Section 1.8 to show that the coordinate mapping from a vector space onto Rⁿ (in Section 4.4) preserves linear independence and dependence of sets of vectors. (See Example 6 in Section 4.4.) The notions of one-to-one and onto appear in the Invertible Matrix Theorem (Section 2.3), but can be omitted there if desired.

Exercises 25–28 and 31–36 offer fairly easy writing practice. Exercises 31, 32, and 35 provide important links to earlier material.

1. A = [T(e1) T(e2)] = [3 −5; 1 2; 3 0; 1 0]

2. A = [T(e1) T(e2) T(e3)] = [1 −2 3; 4 9 −8]

3. T(e1) = e1 – 3e2 = (1, –3), T(e2) = e2, so A = [1 0; −3 1]

4. T(e1) = e1, T(e2) = e2 + 2e1 = (2, 1), so A = [1 2; 0 1]

5. T(e1) = e2, T(e2) = –e1. A = [e2 −e1] = [0 −1; 1 0]

6. T(e1) = e2, T(e2) = –e1. A = [e2 −e1] = [0 −1; 1 0]

7. Follow what happens to e1 and e2. Since e1 is on the unit circle in the plane, it rotates through –3π/4 radians into a point on the unit circle that lies in the third quadrant and on the line x2 = x1 (that is, y = x in more familiar notation). The point (–1, –1) is on the line x2 = x1, but its distance from the origin is √2. So the rotational image of e1 is (–1/√2, –1/√2). Then this image reflects in the horizontal axis to (–1/√2, 1/√2). Similarly, e2 rotates into a point on the unit circle that lies in the fourth quadrant and on the line x2 = –x1, namely, (1/√2, –1/√2). Then this image reflects in the horizontal axis to (1/√2, 1/√2). When the two calculations described above are written in vertical vector notation, the transformation's standard matrix [T(e1) T(e2)] is easily seen:

e1 → (–1/√2, –1/√2) → (–1/√2, 1/√2),  e2 → (1/√2, –1/√2) → (1/√2, 1/√2),  so A = [−1/√2 1/√2; 1/√2 1/√2]
8. The horizontal shear maps e1 into e1, and then the reflection in the line x2 = –x1 maps e1 into –e2. (See Table 1.) The horizontal shear maps e2 into e2 + 2e1. To find the image of e2 + 2e1 when it is reflected in the line x2 = –x1, use the fact that such a reflection is a linear transformation. So, the image of e2 + 2e1 is the same linear combination of the images of e2 and e1, namely, –e1 + 2(–e2) = –e1 – 2e2. To summarize,

e1 → e1 → –e2  and  e2 → e2 + 2e1 → –e1 – 2e2,  so A = [0 −1; −1 −2]
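Composite mappings like the one in Exercise 8 can be checked numerically: the standard matrix of "shear, then reflect" is the product of the two standard matrices, with the first transformation on the right. A small NumPy sketch, with the shear and reflection matrices assumed as in this exercise and Table 1:

```python
import numpy as np

S = np.array([[1., 2.],    # horizontal shear: e1 -> e1, e2 -> e2 + 2*e1
              [0., 1.]])
F = np.array([[ 0., -1.],  # reflection in the line x2 = -x1
              [-1.,  0.]])

# Shear acts first, so the composite standard matrix is F @ S.
A = F @ S
print(A)  # columns are the images of e1 and e2
```

The columns of the product are exactly the images computed by hand: T(e1) = (0, –1) and T(e2) = (–1, –2).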
9. e1 → e1 → e2  and  e2 → –e2 → –e1,  so A = [0 −1; 1 0]
10. e1 → e1 → e2  and  e2 → –e2 → –e1,  so A = [e2 −e1] = [0 −1; 1 0]
11. The transformation T described maps e1 → e1 → –e1 and maps e2 → –e2 → –e2. A rotation through π radians also maps e1 into –e1 and maps e2 into –e2. Since a linear transformation is completely determined by what it does to the columns of the identity matrix, the rotation transformation has the same effect as T on every vector in R².
12. The transformation T in Exercise 10 maps e1 → e1 → e2 and maps e2 → –e2 → –e1. A rotation about the origin through π/2 radians also maps e1 into e2 and maps e2 into –e1. Since a linear transformation is completely determined by what it does to the columns of the identity matrix, the rotation transformation has the same effect as T on every vector in R².
13. Since (2, 1) = 2e1 + e2, the image of (2, 1) under T is 2T(e1) + T(e2), by linearity of T. On the figure in the exercise, locate 2T(e1) and use it with T(e2) to form the parallelogram shown below.
14. Since T(x) = Ax = [a1 a2]x = x1a1 + x2a2 = a1 – 2a2 when x = (1, –2), the image of x is located by forming the parallelogram shown below.
15. By inspection,

[2 −4 0; 1 0 1; 0 −1 3][x1; x2; x3] = [2x1 − 4x2; x1 + x3; −x2 + 3x3]
16. By inspection,

[3 −2; 1 4; 0 1][x1; x2] = [3x1 − 2x2; x1 + 4x2; x2]
17. To express T(x) as Ax, write T(x) and x as column vectors, and then fill in the entries in A by inspection, as done in Exercises 15 and 16. Note that since T(x) and x have four entries, A must be a 4×4 matrix.

T(x) = [x1 + 2x2; 0; 2x2 + x4; x2 − x4] = [1 2 0 0; 0 0 0 0; 0 2 0 1; 0 1 0 −1][x1; x2; x3; x4] = Ax
18. As in Exercise 17, write T(x) and x as column vectors. Since x has 2 entries, A has 2 columns. Since T(x) has 4 entries, A has 4 rows.

T(x) = [x1 + 4x2; 0; x1 + 3x2; x1] = [1 4; 0 0; 1 3; 1 0][x1; x2] = Ax
19. Since T(x) has 2 entries, A has 2 rows. Since x has 3 entries, A has 3 columns.

T(x) = [x1 − 5x2 + 4x3; x2 − 6x3] = [1 −5 4; 0 1 −6][x1; x2; x3] = Ax
20. Since T(x) has 1 entry, A has 1 row. Since x has 4 entries, A has 4 columns.

T(x) = [3x1 + 4x3 + 2x4] = [3 0 4 2][x1; x2; x3; x4] = Ax
21. T(x) = [x1 + x2; 4x1 + 5x2] = [1 1; 4 5][x1; x2] = Ax. To solve T(x) = (3, 8), row reduce the augmented matrix:

[1 1 3; 4 5 8] ~ [1 1 3; 0 1 −4] ~ [1 0 7; 0 1 −4],  x = (7, −4).
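When the standard matrix is square and invertible, the hand row reduction in Exercise 21 can be delegated to a linear solver. A NumPy sketch, with A and b assumed as in Exercise 21:

```python
import numpy as np

A = np.array([[1., 1.],
              [4., 5.]])   # standard matrix of T, as in Exercise 21
b = np.array([3., 8.])     # target vector for T(x) = b

x = np.linalg.solve(A, b)  # unique solution, since det(A) = 1 != 0
print(x)                   # approximately (7, -4)
```

The solver reproduces the row-reduction answer x = (7, –4).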
22. T(x) = [2x1 − x2; 3x1 − x2; 2x1 − 3x2] = [2 −1; 3 −1; 2 −3][x1; x2] = Ax. To solve T(x) = (0, 1, −4), row reduce the augmented matrix:

[2 −1 0; 3 −1 1; 2 −3 −4] ~ [2 −1 0; 0 1 2; 0 −2 −4] ~ [2 −1 0; 0 1 2; 0 0 0] ~ [2 0 2; 0 1 2; 0 0 0] ~ [1 0 1; 0 1 2; 0 0 0],  x = (1, 2).
23. a. True. See Theorem 10.
b. True. See Example 3.
c. False. See the paragraph before Table 1.
d. False. See the definition of onto. Any function from Rⁿ to Rᵐ maps each vector onto another vector.
e. False. See Example 5.

24. a. False. See Theorem 12.
b. True. See Theorem 10.
c. True. See Theorem 10.
d. False. See the definition of one-to-one. Any function from Rⁿ to Rᵐ maps a vector onto a single (unique) vector.
e. False. See Table 3.
25. A row interchange and a row replacement on the standard matrix A of the transformation T in Exercise 17 produce [1 2 0 0; 0 1 0 −1; 0 0 0 3; 0 0 0 0]. This matrix shows that A has only three pivot positions, so the equation Ax = 0 has a nontrivial solution. By Theorem 11, the transformation T is not one-to-one. Also, since A does not have a pivot in each row, the columns of A do not span R⁴. By Theorem 12, T does not map R⁴ onto R⁴.
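The pivot counting in Exercises 25–30 amounts to a rank computation: T is one-to-one when the rank equals the number of columns, and onto when it equals the number of rows. A NumPy sketch using the standard matrix of Exercise 17 (entries as reconstructed here; treat them as an assumption if your printing differs):

```python
import numpy as np

# Standard matrix of the transformation in Exercise 17 (assumed entries).
A = np.array([[1., 2., 0., 0.],
              [0., 0., 0., 0.],
              [0., 2., 0., 1.],
              [0., 1., 0., -1.]])

rank = np.linalg.matrix_rank(A)
print(rank)                # 3: only three pivot positions
print(rank == A.shape[1])  # False: not one-to-one (no pivot in every column)
print(rank == A.shape[0])  # False: not onto (no pivot in every row)
```

Since the rank is 3 for this 4×4 matrix, T is neither one-to-one nor onto, matching the pivot argument.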
26. The standard matrix A of the transformation T in Exercise 2 is 2×3. Its columns are linearly dependent because A has more columns than rows. So T is not one-to-one, by Theorem 12. Also, A is row equivalent to [1 −2 3; 0 17 −20], which shows that A has a pivot in each row and hence that the columns of A span R². By Theorem 12, T maps R³ onto R².
27. The standard matrix A of the transformation T in Exercise 19 is [1 −5 4; 0 1 −6]. The columns of A are linearly dependent because A has more columns than rows. So T is not one-to-one, by Theorem 12. Also, A has a pivot in each row, so the columns of A span R². By Theorem 12, T maps R³ onto R².
28. The standard matrix A of the transformation T in Exercise 14 has linearly independent columns, because the figure in that exercise shows that a1 and a2 are not multiples. So T is one-to-one, by Theorem 12. Also, A must have a pivot in each column because the equation Ax = 0 has no free variables. Thus, the echelon form of A is [■ *; 0 ■]. Since A has a pivot in each row, the columns of A span R². So T maps R² onto R². An alternate argument for the second part is to observe directly from the figure in Exercise 14 that a1 and a2 span R². This is more or less evident, based on experience with grids such as those in Figure 8 and Exercise 7 of Section 1.3.
29. By Theorem 12, the columns of the standard matrix A must be linearly independent and hence the equation Ax = 0 has no free variables. So each column of A must be a pivot column:

A ~ [■ * *; 0 ■ *; 0 0 ■; 0 0 0]

Note that T cannot be onto because of the shape of A.
30. By Theorem 12, the columns of the standard matrix A must span R³. By Theorem 4, the matrix must have a pivot in each row. There are four possibilities for the echelon form:

[■ * * *; 0 ■ * *; 0 0 ■ *],  [■ * * *; 0 ■ * *; 0 0 0 ■],  [■ * * *; 0 0 ■ *; 0 0 0 ■],  [0 ■ * *; 0 0 ■ *; 0 0 0 ■]

Note that T cannot be one-to-one because of the shape of A.
31. "T is one-to-one if and only if A has n pivot columns." By Theorem 12(b), T is one-to-one if and only if the columns of A are linearly independent. And from the statement in Exercise 30 in Section 1.7, the columns of A are linearly independent if and only if A has n pivot columns.
32. The transformation T maps Rⁿ onto Rᵐ if and only if the columns of A span Rᵐ, by Theorem 12. This happens if and only if A has a pivot position in each row, by Theorem 4 in Section 1.4. Since A has m rows, this happens if and only if A has m pivot columns. Thus, "T maps Rⁿ onto Rᵐ if and only if A has m pivot columns."
33. Define T: Rⁿ → Rᵐ by T(x) = Bx for some m×n matrix B, and let A be the standard matrix for T. By definition, A = [T(e1) ⋯ T(en)], where ej is the jth column of In. However, by matrix-vector multiplication, T(ej) = Bej = bj, the jth column of B. So A = [b1 ⋯ bn] = B.
34. Take u and v in Rᵖ and let c and d be scalars. Then

T(S(cu + dv)) = T(cS(u) + dS(v))   because S is linear
             = cT(S(u)) + dT(S(v))  because T is linear

This calculation shows that the mapping x ↦ T(S(x)) is linear. See equation (4) in Section 1.8.
35. If T: Rⁿ → Rᵐ maps Rⁿ onto Rᵐ, then its standard matrix A has a pivot in each row, by Theorem 12 and by Theorem 4 in Section 1.4. So A must have at least as many columns as rows. That is, m ≤ n. When T is one-to-one, A must have a pivot in each column, by Theorem 12, so m ≥ n.
36. The transformation T maps Rⁿ onto Rᵐ if and only if for each y in Rᵐ there exists an x in Rⁿ such that y = T(x).
37. [M] Row reduction of the standard matrix A produces pivots in only the first three columns. There is no pivot in the fourth column of A, so the equation Ax = 0 has a nontrivial solution. By Theorem 11, the transformation T is not one-to-one. (For a shorter argument, use the result of Exercise 31.)
38. [M] The standard matrix A row reduces to the identity, A ~ ⋯ ~ I₄. Yes: there is a pivot in every column of A, so the equation Ax = 0 has only the trivial solution. By Theorem 11, the transformation T is one-to-one. (For a shorter argument, use the result of Exercise 31.)
39. [M] Row reduction of the 5×5 standard matrix A produces an echelon form with a row of zeros. There is not a pivot in every row, so the columns of the standard matrix do not span R⁵. By Theorem 12, the transformation T does not map R⁵ onto R⁵.
40. [M] The 5×5 standard matrix A row reduces to the identity, A ~ ⋯ ~ I₅. There is a pivot in every row, so the columns of the standard matrix span R⁵. By Theorem 12, the transformation T maps R⁵ onto R⁵.
1.10 SOLUTIONS
1. a. If x1 is the number of servings of Cheerios and x2 is the number of servings of 100% Natural Cereal, then x1 and x2 should satisfy

x1·[nutrients per serving of Cheerios] + x2·[nutrients per serving of 100% Natural] = [quantities of nutrients required]

That is,

x1[110; 4; 20; 2] + x2[130; 3; 18; 5] = [295; 9; 48; 8]

b. The equivalent matrix equation is [110 130; 4 3; 20 18; 2 5][x1; x2] = [295; 9; 48; 8]. To solve this, row reduce the augmented matrix for this equation.

[110 130 295; 4 3 9; 20 18 48; 2 5 8] ~ [2 5 8; 4 3 9; 20 18 48; 110 130 295] ~ [1 2.5 4; 4 3 9; 20 18 48; 110 130 295]
~ [1 2.5 4; 0 −7 −7; 0 −16 −16; 0 −145 −145] ~ [1 2.5 4; 0 1 1; 0 0 0; 0 0 0] ~ [1 0 1.5; 0 1 1; 0 0 0; 0 0 0]

The desired nutrients are provided by 1.5 servings of Cheerios together with 1 serving of 100% Natural Cereal.
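The row reduction in part (b) can be delegated to a least-squares solver: the system is overdetermined (four equations, two unknowns) but consistent, so least squares recovers the exact mixture. A NumPy sketch with the Cheerios data from Exercise 1:

```python
import numpy as np

# Columns: nutrients per serving of Cheerios and of 100% Natural Cereal.
A = np.array([[110., 130.],   # calories
              [  4.,   3.],   # protein (g)
              [ 20.,  18.],   # carbohydrate (g)
              [  2.,   5.]])  # fat (g)
b = np.array([295., 9., 48., 8.])  # required amounts

x, res, rank, sv = np.linalg.lstsq(A, b, rcond=None)
print(x)  # approximately (1.5, 1.0)
```

Because the system is consistent, the least-squares residual is essentially zero and the answer matches the hand computation: 1.5 servings of Cheerios and 1 serving of 100% Natural Cereal.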
2. Set up nutrient vectors for one serving of Shredded Wheat (SW) and Kellogg's Crispix (Crp):

Nutrients:    SW    Crp
calories     160    110
protein        5      2
fiber          6     .1
fat            1     .4

a. Let B = [SW Crp] = [160 110; 5 2; 6 .1; 1 .4] and u = [3; 2]. Then Bu lists the amounts of calories, protein, fiber, and fat in a mixture of three servings of Shredded Wheat and two servings of Crispix.

b. [M] Let u1 and u2 be the number of servings of Shredded Wheat and Crispix, respectively. Can these numbers satisfy the equation B[u1; u2] = [130; 3.2; 2.46; .64]? To find out, row reduce the augmented matrix

[160 110 130; 5 2 3.2; 6 .1 2.46; 1 .4 .64] ~ ⋯ ~ [1 0 .4; 0 1 .6; 0 0 0; 0 0 0]

Since the system is consistent, it is possible for a mixture of the two cereals to provide the desired nutrients. The mixture is .4 servings of Shredded Wheat and .6 servings of Crispix.
3. a. [M] Let x1, x2, and x3 be the number of servings of Annie's Mac and Cheese, broccoli, and chicken, respectively, needed for the lunch. The values of x1, x2, and x3 should satisfy

x1·[nutrients per serving of Mac and Cheese] + x2·[nutrients per serving of broccoli] + x3·[nutrients per serving of chicken] = [quantities of nutrients required]

From the given data,

x1[270; 10; 2] + x2[51; 5.4; 5.2] + x3[70; 15; 0] = [400; 30; 10]

To solve, row reduce the corresponding augmented matrix:

[270 51 70 400; 10 5.4 15 30; 2 5.2 0 10] ~ ⋯ ~ [1 0 0 .99; 0 1 0 1.54; 0 0 1 .79]

x = (.99, 1.54, .79): .99 servings of Mac and Cheese, 1.54 servings of broccoli, and .79 servings of chicken.

b. [M] Changing from Annie's Mac and Cheese to Annie's Whole Wheat Shells and White Cheddar changes the vector equation to

x1[260; 9; 5] + x2[51; 5.4; 5.2] + x3[70; 15; 0] = [400; 30; 10]

To solve, row reduce the corresponding augmented matrix:

[260 51 70 400; 9 5.4 15 30; 5 5.2 0 10] ~ ⋯ ~ [1 0 0 1.09; 0 1 0 .88; 0 0 1 1.03]

x = (1.09, .88, 1.03): 1.09 servings of Shells, .88 servings of broccoli, and 1.03 servings of chicken. Notice that the number of servings of broccoli has decreased, as was desired.
4. Here are the data, assembled from Table 1 and Exercise 4:

                      Mg of Nutrients/Unit                  Nutrients Required
Nutrient     nonfat milk  soy flour   whey   soy prot.        (milligrams)
protein           36          51       13       80                 33
carboh.           52          34       74        0                 45
fat                0           7      1.1      3.4                  3
calcium         1.26         .19       .8      .18                 .8

a. Let x1, x2, x3, x4 represent the number of units of nonfat milk, soy flour, whey, and isolated soy protein, respectively. These amounts must satisfy the following matrix equation:

[36 51 13 80; 52 34 74 0; 0 7 1.1 3.4; 1.26 .19 .8 .18][x1; x2; x3; x4] = [33; 45; 3; .8]

b. [M]

[36 51 13 80 33; 52 34 74 0 45; 0 7 1.1 3.4 3; 1.26 .19 .8 .18 .8] ~ ⋯ ~ [1 0 0 0 .64; 0 1 0 0 .54; 0 0 1 0 −.09; 0 0 0 1 −.21]

The "solution" is x1 = .64, x2 = .54, x3 = –.09, x4 = –.21. This solution is not feasible, because the mixture cannot include negative amounts of whey and isolated soy protein. Although the coefficients of these two ingredients are fairly small, they cannot be ignored. The mixture of .64 units of nonfat milk and .54 units of soy flour provide 50.6 g of protein, 51.6 g of carbohydrate, 3.8 g of fat, and .9 g of calcium. Some of these nutrients are nowhere close to the desired amounts.
5. Loop 1: The resistance vector is

r1 = [11; −5; 0; 0]
11: the total of the RI voltage drops for current I1
−5: the voltage drop for I2 is negative; I2 flows in the opposite direction
0: current I3 does not flow in loop 1
0: current I4 does not flow in loop 1

Loop 2: The resistance vector is

r2 = [−5; 10; −1; 0]
−5: the voltage drop for I1 is negative; I1 flows in the opposite direction
10: the total of the RI voltage drops for current I2
−1: the voltage drop for I3 is negative; I3 flows in the opposite direction
0: current I4 does not flow in loop 2

Also, r3 = [0; −1; 9; −2], r4 = [0; 0; −2; 10], and

R = [r1 r2 r3 r4] = [11 −5 0 0; −5 10 −1 0; 0 −1 9 −2; 0 0 −2 10]

Notice that each off-diagonal entry of R is negative (or zero). This happens because the loop current directions are all chosen in the same direction on the figure. (For each loop j, this choice forces the currents in other loops adjacent to loop j to flow in the direction opposite to current Ij.)

Next, set v = [50; −40; 30; −30]. The voltages in loops 2 and 4 are negative because the battery orientation in each loop is opposite to the direction chosen for positive current flow. Thus, the equation Ri = v becomes

[11 −5 0 0; −5 10 −1 0; 0 −1 9 −2; 0 0 −2 10][I1; I2; I3; I4] = [50; −40; 30; −30]

[M]: The solution is i = (I1, I2, I3, I4) = (3.68, −1.90, 2.57, −2.49).
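Each of the circuit exercises reduces to one call to a linear solver once R and v are assembled. A NumPy sketch for this loop system:

```python
import numpy as np

# Loop-current system Ri = v from Exercise 5.
R = np.array([[11., -5.,  0.,  0.],
              [-5., 10., -1.,  0.],
              [ 0., -1.,  9., -2.],
              [ 0.,  0., -2., 10.]])
v = np.array([50., -40., 30., -30.])

i = np.linalg.solve(R, v)  # loop currents I1..I4
print(np.round(i, 2))      # approximately (3.68, -1.90, 2.57, -2.49)
```

A negative current simply means the actual flow is opposite to the direction chosen on the figure.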
6. Loop 1: The resistance vector is

r1 = [6; −1; 0; 0]
6: the total of the RI voltage drops for current I1
−1: the voltage drop for I2 is negative; I2 flows in the opposite direction
0: current I3 does not flow in loop 1
0: current I4 does not flow in loop 1

Loop 2: The resistance vector is

r2 = [−1; 9; −4; 0]
−1: the voltage drop for I1 is negative; I1 flows in the opposite direction
9: the total of the RI voltage drops for current I2
−4: the voltage drop for I3 is negative; I3 flows in the opposite direction
0: current I4 does not flow in loop 2

Also, r3 = [0; −4; 7; −2], r4 = [0; 0; −2; 7], and R = [r1 r2 r3 r4] = [6 −1 0 0; −1 9 −4 0; 0 −4 7 −2; 0 0 −2 7]. Set v = [30; 20; 40; 10]. Then Ri = v becomes

[6 −1 0 0; −1 9 −4 0; 0 −4 7 −2; 0 0 −2 7][I1; I2; I3; I4] = [30; 20; 40; 10]

[M]: The solution is i = (I1, I2, I3, I4) = (6.36, 8.14, 11.73, 4.78).
7. Loop 1: The resistance vector is

r1 = [12; −7; 0; −4]
12: the total of the RI voltage drops for current I1
−7: the voltage drop for I2 is negative; I2 flows in the opposite direction
0: current I3 does not flow in loop 1
−4: the voltage drop for I4 is negative; I4 flows in the opposite direction

Loop 2: The resistance vector is

r2 = [−7; 15; −6; 0]
−7: the voltage drop for I1 is negative; I1 flows in the opposite direction
15: the total of the RI voltage drops for current I2
−6: the voltage drop for I3 is negative; I3 flows in the opposite direction
0: current I4 does not flow in loop 2

Also, r3 = [0; −6; 14; −5], r4 = [−4; 0; −5; 13], and

R = [r1 r2 r3 r4] = [12 −7 0 −4; −7 15 −6 0; 0 −6 14 −5; −4 0 −5 13]

Notice that each off-diagonal entry of R is negative (or zero). This happens because the loop current directions are all chosen in the same direction on the figure. (For each loop j, this choice forces the currents in other loops adjacent to loop j to flow in the direction opposite to current Ij.)

Next, set v = [40; 30; 20; −10]. Note the negative voltage in loop 4. The current direction chosen in loop 4 is opposed by the orientation of the voltage source in that loop. Thus Ri = v becomes

[12 −7 0 −4; −7 15 −6 0; 0 −6 14 −5; −4 0 −5 13][I1; I2; I3; I4] = [40; 30; 20; −10]

[M]: The solution is i = (I1, I2, I3, I4) = (11.43, 10.55, 8.04, 5.84).
8. Loop 1: The resistance vector is

r1 = [9; −1; 0; −1; −4]
9: the total of the RI voltage drops for current I1
−1: the voltage drop for I2 is negative; I2 flows in the opposite direction
0: current I3 does not flow in loop 1
−1: the voltage drop for I4 is negative; I4 flows in the opposite direction
−4: the voltage drop for I5 is negative; I5 flows in the opposite direction

Loop 2: The resistance vector is

r2 = [−1; 7; −2; 0; −3]
−1: the voltage drop for I1 is negative; I1 flows in the opposite direction
7: the total of the RI voltage drops for current I2
−2: the voltage drop for I3 is negative; I3 flows in the opposite direction
0: current I4 does not flow in loop 2
−3: the voltage drop for I5 is negative; I5 flows in the opposite direction

Also, r3 = [0; −2; 10; −3; −3], r4 = [−1; 0; −3; 7; −2], r5 = [−4; −3; −3; −2; 12], and

R = [9 −1 0 −1 −4; −1 7 −2 0 −3; 0 −2 10 −3 −3; −1 0 −3 7 −2; −4 −3 −3 −2 12]

Set v = [50; −30; 20; −40; 0]. Note the negative voltages for loops where the chosen current direction is opposed by the orientation of the voltage source in that loop. Thus Ri = v becomes:

[9 −1 0 −1 −4; −1 7 −2 0 −3; 0 −2 10 −3 −3; −1 0 −3 7 −2; −4 −3 −3 −2 12][I1; I2; I3; I4; I5] = [50; −30; 20; −40; 0]

[M] The solution is i = (I1, I2, I3, I4, I5) = (4.00, −4.38, −.90, −5.80, −.96).
9. The population movement problems in this section assume that the total population is constant, with no migration or immigration. The statement that "about 7% of the city's population moves to the suburbs" means also that the rest of the city's population (93%) remain in the city. This determines the entries in the first column of the migration matrix (which concerns movement from the city):

From:  City      To:
        .93      City
        .07      Suburbs

Likewise, if 5% of the suburban population moves to the city, then the other 95% remain in the suburbs. This determines the second column of the migration matrix: M = [.93 .05; .07 .95]. The difference equation is xk+1 = Mxk for k = 0, 1, 2, …. Also, x0 = [800,000; 500,000].

The population in 2011 (when k = 1) is x1 = Mx0 = [.93 .05; .07 .95][800,000; 500,000] = [769,000; 531,000].

The population in 2012 (when k = 2) is x2 = Mx1 = [.93 .05; .07 .95][769,000; 531,000] = [741,720; 558,280].
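The difference equation xk+1 = Mxk is just repeated matrix-vector multiplication, so the projections in Exercise 9 are easy to generate in a loop. A NumPy sketch with the data of that exercise:

```python
import numpy as np

M = np.array([[.93, .05],
              [.07, .95]])          # migration matrix (city, suburbs)
x = np.array([800_000., 500_000.])  # x0: populations in 2010

for year in (2011, 2012):
    x = M @ x                       # x_{k+1} = M x_k
    print(year, x)                  # 2011: ~(769000, 531000); 2012: ~(741720, 558280)
```

The same loop, run longer, produces the vectors x5, x10, x15, x20 requested in Exercise 13.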
10. The data in the first sentence implies that the migration matrix has the form:

From:  City  Suburbs    To:
         .     .04      City
        .06     .       Suburbs

The remaining entries are determined by the fact that the numbers in each column must sum to 1. (For instance, if 6% of the city people move to the suburbs, then the rest, or 94%, remain in the city.) So the migration matrix is M = [.94 .04; .06 .96]. The initial population is x0 = [10,000,000; 800,000].

The population in 2011 (when k = 1) is x1 = Mx0 = [.94 .04; .06 .96][10,000,000; 800,000] = [9,432,000; 1,368,000].

The population in 2012 (when k = 2) is x2 = Mx1 = [.94 .04; .06 .96][9,432,000; 1,368,000] = [8,920,800; 1,879,200].
11. The problem concerns two groups of people: those living in California and those living outside California (and in the United States). It is reasonable, but not essential, to consider the people living inside California first. That is, the first entry in a column or row of a vector will concern the people living in California. With this choice, the migration matrix has columns "From Calif." and "From Outside" and rows "To Calif." and "To Outside."

a. For the first column of the migration matrix M, compute

{Calif. persons who moved} / {Total Calif. pop.} = 516,100 / 31,524,000 = .016372

The other entry in the first column is 1 – .016372 = .983628. The exercise requests that 5 decimal places be used. So this number should be rounded to .98363. Whatever number of decimal places is used, it is important that the two entries sum to 1. So, for the first fraction, use .01637.

For the second column of M, compute

{outside persons who moved} / {Total outside pop.} = 381,262 / 228,680,000 = .00167

The other entry is 1 – .00167 = .99833. Thus, the migration matrix is

M = [.98363 .00167; .01637 .99833]   (columns: from Calif., from Outside; rows: to Calif., to Outside)

b. [M] The initial vector is x0 = (31.524, 228.680), with data in millions of persons. Since x0 describes the population in 1994, and x1 describes the population in 1995, the vector x6 describes the projected population for the year 2000, assuming that the migration rates remain constant and there are no births, deaths, or migration into or out of the country. Here are the vectors x0 through x6 with the first 5 figures displayed. Numbers are in millions of persons:

x0 = [31.524; 228.68], [31.390; 228.82], [31.258; 228.95], [31.129; 229.08], [31.002; 229.20], [30.877; 229.33], [30.755; 229.45] = x6
12. Set M = [.97 .05 .10; .00 .90 .05; .03 .05 .85] and x0 = [295; 55; 150]. Then

x1 = [.97 .05 .10; .00 .90 .05; .03 .05 .85][295; 55; 150] ≈ [304; 57; 139], and

x2 = [.97 .05 .10; .00 .90 .05; .03 .05 .85][304; 57; 139] ≈ [312; 58; 130].

The entries in x2 give the approximate distribution of cars on Wednesday, two days after Monday.
13. [M] The order of entries in a column of a migration matrix must match the order of the columns. For instance, if the first column concerns the population in the city, then the first entry in each column must be the fraction of the population that moves to (or remains in) the city. In this case, the data in the exercise leads to M = [.95 .03; .05 .97] and x0 = [600,000; 400,000].

a. Some of the population vectors are

x5 = [523,293; 476,707], x10 = [472,737; 527,263], x15 = [439,417; 560,583], x20 = [417,456; 582,544]

The data here shows that the city population is declining and the suburban population is increasing, but the changes in population each year seem to grow smaller.

b. When x0 = [350,000; 650,000], the situation is different. Now

x5 = [358,523; 641,477], x10 = [364,140; 635,860], x15 = [367,843; 632,157], x20 = [370,283; 629,717]

The city population is increasing slowly and the suburban population is decreasing. No other conclusions are expected. (This example will be analyzed in greater detail later in the text.)
14. Here are Figs. (a) and (b) for Exercise 14, followed by the figure for Exercise 34 in Section 1.1:

[Figures: three square plates, each with four interior nodes numbered 1, 2 (top) and 4, 3 (bottom), with the boundary temperatures listed in part (a) below.]
For Fig. (a), the equations are

4T1 = 0 + 20 + T2 + T4
4T2 = T1 + 20 + 0 + T3
4T3 = T4 + T2 + 0 + 20
4T4 = 0 + T1 + T3 + 20

To solve the system, rearrange the equations and row reduce the augmented matrix. Interchanging rows 1 and 4 speeds up the calculations:

[4 −1 0 −1 20; −1 4 −1 0 20; 0 −1 4 −1 20; −1 0 −1 4 20] ~ ⋯ ~ [1 0 0 0 10; 0 1 0 0 10; 0 0 1 0 10; 0 0 0 1 10]
For Fig. (b), the equations are

4T1 = 10 + 0 + T2 + T4
4T2 = T1 + 0 + 40 + T3
4T3 = T4 + T2 + 40 + 10
4T4 = 10 + T1 + T3 + 10

Rearrange the equations and row reduce the augmented matrix:

[4 −1 0 −1 10; −1 4 −1 0 40; 0 −1 4 −1 50; −1 0 −1 4 20] ~ ⋯ ~ [1 0 0 0 10; 0 1 0 0 17.5; 0 0 1 0 20; 0 0 0 1 12.5]
a. Here are the solution temperatures for the three problems studied:

Fig. (a) in Exercise 14 of Section 1.10: (10, 10, 10, 10)
Fig. (b) in Exercise 14 of Section 1.10: (10, 17.5, 20, 12.5)
Figure for Exercise 34 in Section 1.1: (20, 27.5, 30, 22.5)

When the solutions are arranged this way, it is evident that the third solution is the sum of the first two solutions. What might not be so evident is that the list of boundary temperatures of the third problem is the sum of the lists of boundary temperatures of the first two problems. (The temperatures are listed clockwise, starting at the left of T1.)

Fig. (a): ( 0, 20, 20, 0, 0, 20, 20, 0)
Fig. (b): (10, 0, 0, 40, 40, 10, 10, 10)
Fig. from Section 1.1: (10, 20, 20, 40, 40, 30, 30, 10)

b. When the boundary temperatures in Fig. (a) are multiplied by 3, the new interior temperatures are also multiplied by 3.
c. The correspondence from the list of eight boundary temperatures to the list of four interior
temperatures is a linear transformation. A verification of this statement is not expected. However,
it can be shown that the solutions of the steady-state temperature problem here satisfy a
superposition principle. The system of equations that approximate the interior temperatures can
Chapter 1 • Supplementary Exercises 79
be written in the form Ax = b, where A is determined by the arrangement of the four interior points on the plate and b is a vector in R^4 determined by the boundary temperatures.
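The superposition behavior noted in parts (a)–(c) is easy to confirm numerically. The sketch below, written for this illustration in plain Python with exact rational arithmetic, solves the two 4×4 mesh systems of this exercise (and the Section 1.1 system) with a generic Gauss–Jordan routine and checks that the third solution is the sum of the first two.

```python
from fractions import Fraction

def solve(aug):
    """Gauss-Jordan elimination on an augmented matrix [A | b]."""
    m = [[Fraction(x) for x in row] for row in aug]
    n = len(m)
    for i in range(n):
        p = next(r for r in range(i, n) if m[r][i] != 0)   # find a pivot row
        m[i], m[p] = m[p], m[i]
        m[i] = [x / m[i][i] for x in m[i]]                 # scale pivot to 1
        for r in range(n):
            if r != i:                                     # clear the column
                m[r] = [a - m[r][i] * b for a, b in zip(m[r], m[i])]
    return [row[-1] for row in m]

# Same coefficient matrix for all three plates; only the boundary sums differ.
A = [[4, -1, 0, -1], [-1, 4, -1, 0], [0, -1, 4, -1], [-1, 0, -1, 4]]
ta = solve([row + [b] for row, b in zip(A, [20, 20, 20, 20])])  # Fig. (a)
tb = solve([row + [b] for row, b in zip(A, [10, 40, 50, 20])])  # Fig. (b)
tc = solve([row + [b] for row, b in zip(A, [30, 60, 70, 40])])  # Section 1.1

assert ta == [10, 10, 10, 10]
assert tb == [10, Fraction(35, 2), 20, Fraction(25, 2)]   # (10, 17.5, 20, 12.5)
assert tc == [x + y for x, y in zip(ta, tb)]              # superposition
```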
Note:
The MATLAB box in the Study Guide for Section 1.10 discusses scientific notation and shows how to generate a matrix whose columns list the vectors x0, x1, x2, …, determined by an equation x_{k+1} = Mx_k for k = 0, 1, ….
Chapter 1 SUPPLEMENTARY EXERCISES
1. a. False. (The word “reduced” is missing.) Counterexample:
A = [1 2]    B = [1 2]    C = [1 2]
    [3 4],       [0 2],       [0 1]
The matrix A is row equivalent to matrices B and C, both in echelon form.
b. False. Counterexample: Let A be any n×n matrix with fewer than n pivot columns. Then the
equation Ax = 0 has infinitely many solutions. (Theorem 2 in Section 1.2 says that a system has
either zero, one, or infinitely many solutions, but it does not say that a system with infinitely
many solutions exists. Some counterexample is needed.)
c. True. If a linear system has more than one solution, it is a consistent system and has a free
variable. By the Existence and Uniqueness Theorem in Section 1.2, the system has infinitely
many solutions.
d. False. Counterexample: The following system has no free variables and no solution:
x1 + x2 = 1
     x2 = 5
x1 + x2 = 2
e. True. See the box after the definition of elementary row operations, in Section 1.1. If [A b] is
transformed into [C d] by elementary row operations, then the two augmented matrices are row
equivalent.
f. True. Theorem 6 in Section 1.5 essentially says that when Ax = b is consistent, the solution sets
of the nonhomogeneous equation and the homogeneous equation are translates of each other. In
this case, the two equations have the same number of solutions.
g. False. For the columns of A to span R^m, the equation Ax = b must be consistent for all b in R^m, not for just one vector b in R^m.
h. False. Any matrix can be transformed by elementary row operations into reduced echelon form,
but not every matrix equation Ax = b is consistent.
i. True. If A is row equivalent to B, then A can be transformed by elementary row operations first
into B and then further transformed into the reduced echelon form U of B. Since the reduced
echelon form of A is unique, it must be U.
j. False. Every equation Ax = 0 has the trivial solution whether or not some variables are free.
k. True, by Theorem 4 in Section 1.4. If the equation Ax = b is consistent for every b in R^m, then A must have a pivot position in every one of its m rows. If A has m pivot positions, then A has m pivot columns, each containing one pivot position.
l. False. The word “unique” should be deleted. Let A be any matrix with m pivot columns but more than m columns altogether. Then the equation Ax = b is consistent and has m basic variables and at least one free variable. Thus the equation does not have a unique solution.
m. True. If A has n pivot positions, it has a pivot in each of its n columns and in each of its n rows.
The reduced echelon form has a 1 in each pivot position, so the reduced echelon form is the n×n
identity matrix.
n. True. Both matrices A and B can be row reduced to the 3×3 identity matrix, as discussed in the previous question. Since the row operations that transform B into I3 are reversible, A can be transformed first into I3 and then into B.
o. True. The reason is essentially the same as that given for question f.
p. True. If the columns of A span R^m, then the reduced echelon form of A is a matrix U with a pivot in each row, by Theorem 4 in Section 1.4. Since B is row equivalent to A, B can be transformed by row operations first into A and then further transformed into U. Since U has a pivot in each row, so does B. By Theorem 4, the columns of B span R^m.
q. False. See Example 5 in Section 1.7.
r. True. Any set of three vectors in R^2 would have to be linearly dependent, by Theorem 8 in Section 1.7.
s. False. If a set {v1, v2, v3, v4} were to span R^5, then the matrix A = [v1 v2 v3 v4] would have a pivot position in each of its five rows, which is impossible since A has only four columns.
t. True. The vector –u is a linear combination of u and v, namely, –u = (–1)u + 0v.
u. False. If u and v are multiples, then Span{u, v} is a line, and w need not be on that line.
v. False. Let u and v be any linearly independent pair of vectors and let w = 2v. Then w = 0u + 2v,
so w is a linear combination of u and v. However, u cannot be a linear combination of v and w
because if it were, u would be a multiple of v. That is not possible since {u, v} is linearly
independent.
w. False. The statement would be true if the condition that v1 is nonzero were present. See Theorem 7 in Section 1.7. However, if v1 = 0, then {v1, v2, v3} is linearly dependent, no matter what else might be true about v2 and v3.
x. True. “Function” is another word used for “transformation” (as mentioned in the definition of
“transformation” in Section 1.8), and a linear transformation is a special type of transformation.
y. True. For the transformation x ↦ Ax to map R^5 onto R^6, the matrix A would have to have a pivot in every row and hence have six pivot columns. This is impossible because A has only five columns.
z. False. For the transformation x ↦ Ax to be one-to-one, A must have a pivot in each column. Since A has n columns and m pivots, m might be less than n.
2. If a ≠ 0, then x = b/a; the solution is unique. If a = 0 and b ≠ 0, the solution set is empty, because 0x = 0 ≠ b. If a = 0 and b = 0, the equation 0x = 0 has infinitely many solutions.
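The three cases for the single equation ax = b can be captured in a few lines of Python (a sketch for illustration only):

```python
def solution_set(a, b):
    """Classify the solution set of the single linear equation ax = b."""
    if a != 0:
        return "unique"           # x = b/a
    if b != 0:
        return "empty"            # 0x = 0 can never equal a nonzero b
    return "infinitely many"      # 0x = 0 holds for every x

assert solution_set(3, 6) == "unique"
assert solution_set(0, 5) == "empty"
assert solution_set(0, 0) == "infinitely many"
```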
3. a. Any consistent linear system whose echelon form is
[■ * * *]    [■ * * *]    [0 ■ * *]
[0 ■ * *] or [0 0 ■ *] or [0 0 ■ *]
[0 0 0 0]    [0 0 0 0]    [0 0 0 0]
b. Any consistent linear system whose coefficient matrix has reduced echelon form I3.
c. Any inconsistent linear system of three equations in three variables.
4. Since there are three pivots (one in each row), the augmented matrix must reduce to the form
[■ * * *]
[0 ■ * *]
[0 0 ■ *]
A solution of Ax = b exists for all b because there is a pivot in each row of A. Each solution is unique because there are no free variables.
5. a.
[1 3 | k]   [1   3    |   k  ]
[4 h | 8] ~ [0 h - 12 | 8 - 4k]
If h = 12 and k ≠ 2, the second row of the augmented matrix indicates an inconsistent system of the form 0x2 = b, with b nonzero. If h = 12 and k = 2, there is only one nonzero equation, and the system has infinitely many solutions. Finally, if h ≠ 12, the coefficient matrix has two pivots and the system has a unique solution.
b.
[-2 h |  1]   [-2    h    | 1]
[ 6 k | -2] ~ [ 0  k + 3h | 1]
If k + 3h = 0, the system is inconsistent. Otherwise, the coefficient matrix has two pivots and the system has a unique solution.
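The case analysis in part (a) can also be checked numerically. The helper below classifies the system [1 3 | k; 4 h | 8] from its reduced second row [0 h−12 | 8−4k]; it is a small sketch, not part of the manual's solution:

```python
def classify(h, k):
    """Classify [1 3 | k; 4 h | 8] using the reduced row [0  h-12 | 8-4k]."""
    pivot, rhs = h - 12, 8 - 4 * k
    if pivot != 0:
        return "unique solution"
    return "inconsistent" if rhs != 0 else "infinitely many solutions"

assert classify(12, 5) == "inconsistent"               # h = 12, k != 2
assert classify(12, 2) == "infinitely many solutions"  # h = 12, k = 2
assert classify(7, 2) == "unique solution"             # h != 12
```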
6. a. Set v1 = [4; 8], v2 = [-2; -3], v3 = [7; 10], and b = [-5; -3]. “Determine if b is a linear combination of v1, v2, v3.” Or, “Determine if b is in Span{v1, v2, v3}.” To do this, compute
[4 -2  7 -5]   [4 -2  7 -5]
[8 -3 10 -3] ~ [0  1 -4  7]
The system is consistent, so b is in Span{v1, v2, v3}.
b. Set A = [4 -2 7; 8 -3 10], b = [-5; -3]. “Determine if b is a linear combination of the columns of A.”
c. Define T(x) = Ax. “Determine if b is in the range of T.”
b. Set A =
427 5
,
8310 3
−−
ªºªº
=
«»«»
−−
¬¼¬¼
b
. “Determine if b is a linear combination of the columns of A.”
c. Define T(x) = Ax. “Determine if b is in the range of T.”
7. a. Set v1 = [2; -5; 7], v2 = [-4; 1; -5], v3 = [-2; 1; -3] and b = [b1; b2; b3]. “Determine if v1, v2, v3 span R^3.” To do this, row reduce [v1 v2 v3]:
[ 2 -4 -2]   [2 -4 -2]   [2 -4 -2]
[-5  1  1] ~ [0 -9 -4] ~ [0 -9 -4]
[ 7 -5 -3]   [0  9  4]   [0  0  0]
The matrix does not have a pivot in each row, so its columns do not span R^3, by Theorem 4 in Section 1.4.
b. Set A = [2 -4 -2; -5 1 1; 7 -5 -3]. “Determine if the columns of A span R^3.”
c. Define T(x) = Ax. “Determine if T maps R^3 onto R^3.”
8. a.
[■ *]  [■ *]  [0 ■]
[0 ■], [0 0], [0 0]
b.
[■ *]
[0 ■]
[0 0]
9. The first line is the line spanned by [1; 2]. The second line is spanned by [2; 1]. So the problem is to write [5; 6] as the sum of a multiple of [1; 2] and a multiple of [2; 1]. That is, find x1 and x2 such that
x1 [2]  +  x2 [1]  =  [5]
   [1]        [2]     [6]
Reduce the augmented matrix for this equation:
[2 1 5]   [1 2 6]   [1  2  6]   [1 2  6 ]   [1 0 4/3]
[1 2 6] ~ [2 1 5] ~ [0 -3 -7] ~ [0 1 7/3] ~ [0 1 7/3]
Thus
[5]  =  4 [2]  +  7 [1]     or     [5]  =  [8/3]  +  [ 7/3]
[6]     3 [1]     3 [2]            [6]     [4/3]     [14/3]
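The coefficients 4/3 and 7/3 can be double-checked with exact arithmetic, here by Cramer's rule on the 2×2 system (an illustrative sketch):

```python
from fractions import Fraction as F

# x1*[2; 1] + x2*[1; 2] = [5; 6], solved by Cramer's rule.
det = F(2 * 2 - 1 * 1)        # determinant of [2 1; 1 2] is 3
x1 = F(5 * 2 - 1 * 6) / det   # b replaces the first column
x2 = F(2 * 6 - 5 * 1) / det   # b replaces the second column

assert (x1, x2) == (F(4, 3), F(7, 3))
assert x1 * 2 + x2 * 1 == 5 and x1 * 1 + x2 * 2 == 6   # recombines to [5; 6]
```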
10. The line through a1 and the origin and the line through a2 and the origin determine a “grid” on the x1x2-plane as shown below. Every point in R^2 can be described uniquely in terms of this grid. Thus, b can be reached from the origin by traveling a certain number of units in the a1-direction and a certain number of units in the a2-direction.
11. A solution set is a line when the system has one free variable. If the coefficient matrix is 2×3, then two of the columns should be pivot columns. For instance, take [1 2 *; 0 3 *]. Put anything in column 3. The resulting matrix will be in echelon form. Make one row replacement operation on the second row to create a matrix not in echelon form, such as
[1 2 1]   [1 2 1]
[0 3 1] ~ [1 5 2]
12. A solution set is a plane when there are two free variables. If the coefficient matrix is 2×3, then only one column can be a pivot column. The echelon form will have all zeros in the second row. Use a row replacement to create a matrix not in echelon form. For instance, let A = [1 2 3; 1 2 3].
13. The reduced echelon form of A looks like
E = [1 0 *]
    [0 1 *]
    [0 0 0]
Since E is row equivalent to A, the equation Ex = 0 has the same solutions as Ax = 0. Thus
[1 0 *] [ 3]   [0]
[0 1 *] [-2] = [0]
[0 0 0] [ 1]   [0]
By inspection,
E = [1 0 -3]
    [0 1  2]
    [0 0  0]
14. Row reduce the augmented matrix for the equation
x1 [1]  +  x2 [  a  ]  =  [0]     (*)
   [a]        [a + 2]     [0]
[1   a    0]   [1        a         0]
[a  a+2   0] ~ [0  (2 - a)(1 + a)  0]
The equation (*) has a nontrivial solution only when (2 – a)(1 + a) = 0. So the vectors are linearly independent for all a except a = 2 and a = –1.
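Equivalently, the two vectors are dependent exactly when the determinant a + 2 − a² = (2 − a)(1 + a) vanishes, which is easy to confirm in a quick sketch:

```python
def det(a):
    """Determinant of [[1, a], [a, a + 2]]."""
    return 1 * (a + 2) - a * a

assert det(2) == 0 and det(-1) == 0               # dependent exactly here
assert all(det(a) != 0 for a in (-3, 0, 1, 5))    # independent otherwise
assert [a for a in range(-5, 6) if det(a) == 0] == [-1, 2]
```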
15. a. If the three vectors are linearly independent, then a, c, and f must all be nonzero. (The converse is true, too.) Let A be the matrix whose columns are the three linearly independent vectors. Then A must have three pivot columns. (See Exercise 30 in Section 1.7, or realize that the equation Ax = 0 has only the trivial solution and so there can be no free variables in the system of equations.) Since A is 3×3, the pivot positions are exactly where a, c, and f are located.
b. The numbers a, …, f can have any values. Here's why. Denote the columns by v1, v2, and v3. Observe that v1 is not the zero vector. Next, v2 is not a multiple of v1 because the third entry of v2 is nonzero. Finally, v3 is not a linear combination of v1 and v2 because the fourth entry of v3 is nonzero. By Theorem 7 in Section 1.7, {v1, v2, v3} is linearly independent.
16. Denote the columns from right to left by v1, …, v4. The “first” vector v1 is nonzero, v2 is not a multiple of v1 (because the third entry of v2 is nonzero), and v3 is not a linear combination of v1 and v2 (because the second entry of v3 is nonzero). Finally, by looking at first entries in the vectors, v4 cannot be a linear combination of v1, v2, and v3. By Theorem 7 in Section 1.7, the columns are linearly independent.
17. Here are two arguments. The first is a “direct” proof. The second is called a “proof by contradiction.”
i. Since {v1, v2, v3} is a linearly independent set, v1 ≠ 0. Also, Theorem 7 shows that v2 cannot be a multiple of v1, and v3 cannot be a linear combination of v1 and v2. By hypothesis, v4 is not a linear combination of v1, v2, and v3. Thus, by Theorem 7, {v1, v2, v3, v4} cannot be a linearly dependent set and so must be linearly independent.
ii. Suppose that {v1, v2, v3, v4} is linearly dependent. Then by Theorem 7, one of the vectors in the set is a linear combination of the preceding vectors. This vector cannot be v4 because v4 is not in Span{v1, v2, v3}. Also, none of the vectors in {v1, v2, v3} is a linear combination of the preceding vectors, by Theorem 7. So the linear dependence of {v1, v2, v3, v4} is impossible. Thus {v1, v2, v3, v4} is linearly independent.
18. Suppose that c1 and c2 are constants such that
c1 v1 + c2 (v1 + v2) = 0     (*)
Then (c1 + c2)v1 + c2 v2 = 0. Since v1 and v2 are linearly independent, both c1 + c2 = 0 and c2 = 0. It follows that both c1 and c2 in (*) must be zero, which shows that {v1, v1 + v2} is linearly independent.
19. Let M be the line through the origin that is parallel to the line through v1, v2, and v3. Then v2 − v1 and v3 − v1 are both on M. So one of these two vectors is a multiple of the other, say v2 − v1 = k(v3 − v1). This equation produces a linear dependence relation (k − 1)v1 + v2 − kv3 = 0.
A second solution: A parametric equation of the line is x = v1 + t(v2 − v1). Since v3 is on the line, there is some t0 such that v3 = v1 + t0(v2 − v1) = (1 − t0)v1 + t0 v2. So v3 is a linear combination of v1 and v2, and {v1, v2, v3} is linearly dependent.
20. If T(u) = v, then since T is linear,
T(–u) = T((–1)u) = (–1)T(u) = –v.
21. Either compute T(e1), T(e2), and T(e3) to make the columns of A, or write the vectors vertically in the definition of T and fill in the entries of A by inspection:
Ax = [? ? ?] [x1]   [x1]        [1 0 0]
     [? ? ?] [x2] = [x2],  A =  [0 1 0]
     [? ? ?] [x3]   [x3]        [0 0 1]
22. By Theorem 12 in Section 1.9, the columns of A span R^3. By Theorem 4 in Section 1.4, A has a pivot in each of its three rows. Since A has three columns, each column must be a pivot column. So the equation Ax = 0 has no free variables, and the columns of A are linearly independent. By Theorem 12 in Section 1.9, the transformation x ↦ Ax is one-to-one.
23.
[a -b] [4]   [5]                    4a - 3b = 5
[b  a] [3] = [0]   implies that     3a + 4b = 0
Solve:
[4 -3 5]   [4  -3     5  ]   [4 -3   5 ]   [4 0 16/5]   [1 0  4/5]
[3  4 0] ~ [0 25/4 -15/4] ~  [0  1 -3/5] ~ [0 1 -3/5] ~ [0 1 -3/5]
Thus a = 4/5 and b = –3/5.
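The values a = 4/5 and b = −3/5 can be confirmed exactly; note that a² + b² = 1, so the matrix [a −b; b a] is a rotation. A verification sketch:

```python
from fractions import Fraction as F

# Solve 4a - 3b = 5 and 3a + 4b = 0 by Cramer's rule.
d = F(4 * 4 - (-3) * 3)       # determinant 25
a = F(5 * 4 - (-3) * 0) / d   # 20/25
b = F(4 * 0 - 5 * 3) / d      # -15/25

assert (a, b) == (F(4, 5), F(-3, 5))
assert a * a + b * b == 1     # [a -b; b a] is a rotation matrix
```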
24. The matrix equation displayed gives the information 2a + 4b = 2√5 and –4a + 2b = 0. Solve for a and b:
[ 2 4 2√5]   [1  2  √5]   [1 2   √5 ]   [1 0 1/√5]
[-4 2   0] ~ [0 10 4√5] ~ [0 1 2/√5] ~  [0 1 2/√5]
So a = 1/√5 and b = 2/√5.
25. a. The vector lists the number of three-, two-, and one-bedroom apartments provided when x1 floors of plan A are constructed.
b.
x1 [3]  +  x2 [4]  +  x3 [5]
   [7]        [4]        [3]
   [8]        [8]        [9]
c. [M] Solve
x1 [3]  +  x2 [4]  +  x3 [5]  =  [ 66]
   [7]        [4]        [3]     [ 74]
   [8]        [8]        [9]     [136]
[3 4 5  66]   [1 0 -1/2   2]      x1 - (1/2)x3 = 2
[7 4 3  74] ~ [0 1 13/8  15]      x2 + (13/8)x3 = 15
[8 8 9 136]   [0 0   0    0]                 0 = 0
The general solution is
x = [x1]   [ 2 + (1/2)x3 ]   [ 2]        [ 1/2 ]
    [x2] = [15 - (13/8)x3] = [15]  + x3  [-13/8]
    [x3]   [      x3     ]   [ 0]        [  1  ]
However, the only feasible solutions must have whole numbers of floors for each plan. Thus, x3 must be a multiple of 8, to avoid fractions. One solution, for x3 = 0, is to use 2 floors of plan A and 15 floors of plan B. Another solution, for x3 = 8, is to use 6 floors of plan A, 2 floors of plan B, and 8 floors of plan C. These are the only feasible solutions. A larger positive multiple of 8 for x3 makes x2 negative. A negative value for x3, of course, is not feasible either.
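A short brute-force search over the general solution above confirms that (2, 15, 0) and (6, 2, 8) are the only feasible floor counts (an illustrative sketch):

```python
# x1 = 2 + x3/2 and x2 = 15 - (13/8)x3 must be nonnegative integers.
feasible = []
for x3 in range(0, 100):
    if x3 % 8 == 0:                # multiples of 8 clear both fractions
        x1 = 2 + x3 // 2
        x2 = 15 - 13 * x3 // 8
        if x2 >= 0:
            feasible.append((x1, x2, x3))

assert feasible == [(2, 15, 0), (6, 2, 8)]
```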
2.1 SOLUTIONS
Notes:
The definition here of a matrix product AB gives the proper view of AB for nearly all matrix calculations. (The dual fact about the rows of A and the rows of AB is seldom needed, mainly because vectors here are usually written as columns.) I assign Exercise 13 and most of Exercises 17–22 to reinforce the definition of AB.
Exercises 23 and 24 are used in the proof of the Invertible Matrix Theorem, in Section 2.3. Exercises 23–25 are mentioned in a footnote in Section 2.2. A class discussion of the solutions of Exercises 23–25 can provide a transition to Section 2.2. Or, these exercises could be assigned after starting Section 2.2.
Exercises 27 and 28 are optional, but they are mentioned in Example 4 of Section 2.4. Outer products also appear in Exercises 31–34 of Section 4.6 and in the spectral decomposition of a symmetric matrix, in Section 7.1. Exercises 29–33 provide good training for mathematics majors.
1. -2A = (-2) [2  0 -1]   [-4  0  2]
             [4 -5  2] = [-8 10 -4]
Next, use B – 2A = B + (–2A):
B - 2A = [7 -5  1]   [-4  0  2]   [ 3 -5  3]
         [1 -4 -3] + [-8 10 -4] = [-7  6 -7]
The product AC is not defined because the number of columns of A does not match the number of rows of C.
CD = [ 1 2] [ 3 5]   [1(3) + 2(-1)    1(5) + 2(4)]   [ 1 13]
     [-2 1] [-1 4] = [-2(3) + 1(-1)  -2(5) + 1(4)] = [-7 -6]
For mental computation, the row-column rule is probably easier to use than the definition.
2. A + 3B = [2  0 -1]     [7 -5  1]   [2+21  0-15  -1+3]   [23 -15  2]
            [4 -5  2] + 3 [1 -4 -3] = [ 4+3  -5-12  2-9] = [ 7 -17 -7]
The expression 2C – 3E is not defined because 2C has 2 columns and –3E has only 1 column.
DB = [ 3 5] [7 -5  1]   [ 3(7)+5(1)   3(-5)+5(-4)   3(1)+5(-3)]   [26 -35 -12]
     [-1 4] [1 -4 -3] = [-1(7)+4(1)  -1(-5)+4(-4)  -1(1)+4(-3)] = [-3 -11 -13]
The product EC is not defined because the number of columns of E does not match the number of rows of C.
88 CHAPTER 2 Matrix Algebra
3.
3I2 - A = [3 0]   [2 -5]   [3-2  0-(-5)]   [ 1 5]
          [0 3] - [3 -2] = [0-3  3-(-2)] = [-3 5]
(3I2)A = 3(I2 A) = 3A = 3 [2 -5]   [6 -15]
                          [3 -2] = [9  -6]
or
(3I2)A = [3 0] [2 -5]   [3(2)+0  3(-5)+0 ]   [6 -15]
         [0 3] [3 -2] = [0+3(3)  0+3(-2)]  = [9  -6]
4.
A - 5I3 = [ 5 -1  3]   [5 0 0]   [ 0 -1  3]
          [-4  3 -6] - [0 5 0] = [-4 -2 -6]
          [-3  1  2]   [0 0 5]   [-3  1 -3]
(5I3)A = 5(I3 A) = 5A = 5 [ 5 -1  3]   [ 25 -5  15]
                          [-4  3 -6] = [-20 15 -30]
                          [-3  1  2]   [-15  5  10]
or
(5I3)A = [5 0 0] [ 5 -1  3]   [5(5)+0+0   5(-1)+0+0  5(3)+0+0 ]   [ 25 -5  15]
         [0 5 0] [-4  3 -6] = [0+5(-4)+0  0+5(3)+0   0+5(-6)+0] = [-20 15 -30]
         [0 0 5] [-3  1  2]   [0+0+5(-3)  0+0+5(1)   0+0+5(2) ]   [-15  5  10]
5. a.
Ab1 = [-1  3] [ 4]   [-10]         [-1  3] [-2]   [ 11]
      [ 2  4] [-2] = [  0],  Ab2 = [ 2  4] [ 3] = [  8]
      [ 5 -3]        [ 26]         [ 5 -3]        [-19]
AB = [Ab1 Ab2] = [-10  11]
                 [  0   8]
                 [ 26 -19]
b.
[-1  3] [ 4 -2]   [-1(4)+3(-2)  -1(-2)+3(3)]   [-10  11]
[ 2  4] [-2  3] = [ 2(4)+4(-2)   2(-2)+4(3)] = [  0   8]
[ 5 -3]           [ 5(4)-3(-2)   5(-2)-3(3)]   [ 26 -19]
6. a.
Ab1 = [ 4 -3] [1]   [-5]         [ 4 -3] [ 4]   [ 22]
      [-3  5] [3] = [12],  Ab2 = [-3  5] [-2] = [-22]
      [ 0  1]       [ 3]         [ 0  1]        [ -2]
AB = [Ab1 Ab2] = [-5  22]
                 [12 -22]
                 [ 3  -2]
2.1 Solutions 89
b.
[ 4 -3] [1  4]   [ 4(1)-3(3)   4(4)-3(-2)]   [-5  22]
[-3  5] [3 -2] = [-3(1)+5(3)  -3(4)+5(-2)] = [12 -22]
[ 0  1]          [ 0(1)+1(3)   0(4)+1(-2)]   [ 3  -2]
7. Since A has 3 columns, B must match with 3 rows. Otherwise, AB is undefined. Since AB has 7
columns, so does B. Thus, B is 3×7.
8. The number of rows of B matches the number of rows of BC, so B has 5 rows.
9.
AB = [ 2 3] [ 1 9]   [-7  18+3k]               [ 1 9] [ 2 3]   [ -7    12 ]
     [-1 1] [-3 k] = [-4  -9+k ],  while BA =  [-3 k] [-1 1] = [-6-k  -9+k]
Then AB = BA if and only if 18 + 3k = 12 and –4 = –6 – k, which happens if and only if k = –2.
10.
AB = [ 3 -6] [-1 1]   [-21 -21]         [ 3 -6] [-3 -5]   [-21 -21]
     [-1  2] [ 3 4] = [  7   7],  AC =  [-1  2] [ 2  1] = [  7   7]
11.
AD = [1 2 3] [5 0 0]   [ 5  6  6]
     [2 4 5] [0 3 0] = [10 12 10]
     [3 5 6] [0 0 2]   [15 15 12]
DA = [5 0 0] [1 2 3]   [5 10 15]
     [0 3 0] [2 4 5] = [6 12 15]
     [0 0 2] [3 5 6]   [6 10 12]
Right-multiplication (that is, multiplication on the right) by the diagonal matrix D multiplies each
column of A by the corresponding diagonal entry of D. Left-multiplication by D multiplies each row
of A by the corresponding diagonal entry of D. To make AB = BA, one can take B to be a multiple of I3. For instance, if B = 4I3, then AB and BA are both the same as 4A.
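The column/row scaling described here is easy to see in code (plain Python lists, written just to illustrate the observation):

```python
A = [[1, 2, 3], [2, 4, 5], [3, 5, 6]]
d = [5, 3, 2]  # diagonal entries of D

AD = [[A[i][j] * d[j] for j in range(3)] for i in range(3)]  # scales column j by d[j]
DA = [[d[i] * A[i][j] for j in range(3)] for i in range(3)]  # scales row i by d[i]

assert AD == [[5, 6, 6], [10, 12, 10], [15, 15, 12]]
assert DA == [[5, 10, 15], [6, 12, 15], [6, 10, 12]]
assert AD != DA  # this D does not commute with A
```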
12. Consider B = [b1 b2]. To make AB = 0, one needs Ab1 = 0 and Ab2 = 0. By inspection of A, a suitable b1 is [2; 1], or any multiple of [2; 1]. Example: B = [2 -6; 1 -3].
13. Use the definition of AB written in reverse order: [Ab1 ⋯ Abp] = A[b1 ⋯ bp]. Thus [Qr1 ⋯ Qrp] = QR, when R = [r1 ⋯ rp].
14. By definition, UQ = U[q1 ⋯ q4] = [Uq1 ⋯ Uq4]. From Example 6 of Section 1.8, the vector Uq1 lists the total costs (material, labor, and overhead) corresponding to the amounts of products B and C specified in the vector q1. That is, the first column of UQ lists the total costs for materials, labor, and overhead used to manufacture products B and C during the first quarter of the year. Columns 2, 3, and 4 of UQ list the total amounts spent to manufacture B and C during the 2nd, 3rd, and 4th quarters, respectively.
15. a. False. See the definition of AB.
b. False. The roles of A and B should be reversed in the second half of the statement. See the box
after Example 3.
c. True. See Theorem 2(b), read right to left.
d. True. See Theorem 3(b), read right to left.
e. False. The phrase “in the same order” should be “in the reverse order.” See the box after Theorem
3.
16. a. True. See the box after Example 4.
b. False. AB must be a 3×3 matrix, but the formula given here implies that it is a 3×1 matrix. The
plus signs should just be spaces (between columns). This is a common mistake.
c. True. Apply Theorem 3(d) to A² = AA.
d. False. The left-to-right order of (ABC)^T is C^T B^T A^T. The order cannot be changed in general.
e. True. This general statement follows from Theorem 3(b).
17. Since AB = [Ab1 Ab2] = [-3 -11; 1 17], the first column of B satisfies the equation Ax = [-3; 1]. Row reduction:
[A Ab1] = [ 1 -3 -3]   [1 0 3]
          [-3  5  1] ~ [0 1 2]
So b1 = [3; 2]. Similarly,
[A Ab2] = [ 1 -3 -11]   [1 0 1]
          [-3  5  17] ~ [0 1 4]
and b2 = [1; 4].
Note:
An alternative solution of Exercise 17 is to row reduce [A Ab1 Ab2] with one sequence of row operations. This observation can prepare the way for the inversion algorithm in Section 2.2.
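With the matrices as reconstructed in this exercise, each column of B can also be recovered programmatically by solving a 2×2 system. The sketch below uses the explicit 2×2 inverse formula; it assumes the A and AB shown above:

```python
from fractions import Fraction as F

A = [[1, -3], [-3, 5]]
AB = [[-3, -11], [1, 17]]
det = F(A[0][0] * A[1][1] - A[0][1] * A[1][0])  # -4

def solve_col(b1, b2):
    """Solve A x = (b1, b2) via x = A^(-1) b for the 2x2 matrix A."""
    return [(A[1][1] * b1 - A[0][1] * b2) / det,
            (A[0][0] * b2 - A[1][0] * b1) / det]

cols = [solve_col(AB[0][j], AB[1][j]) for j in range(2)]
B = [[int(c[i]) for c in cols] for i in range(2)]   # reassemble B from its columns
assert B == [[3, 1], [2, 4]]
# Check: A times B reproduces AB.
assert all(sum(A[i][k] * B[k][j] for k in range(2)) == AB[i][j]
           for i in range(2) for j in range(2))
```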
18. The third column of AB is also all zeros because Ab3 = A0 = 0.
19. (A solution is in the text.) Write B = [b1 b2 b3]. By definition, the third column of AB is Ab3. By hypothesis, b3 = b1 + b2. So Ab3 = A(b1 + b2) = Ab1 + Ab2, by a property of matrix-vector multiplication. Thus, the third column of AB is the sum of the first two columns of AB.
20. The first two columns of AB are Ab1 and Ab2. They are equal since b1 and b2 are equal.
21. Let bp be the last column of B. By hypothesis, the last column of AB is zero. Thus, Abp = 0. However, bp is not the zero vector, because B has no column of zeros. Thus, the equation Abp = 0 is a linear dependence relation among the columns of A, and so the columns of A are linearly dependent.
Note:
The text answer for Exercise 21 is, “The columns of A are linearly dependent. Why?” The Study Guide supplies the argument above, in case a student needs help.
22. If the columns of B are linearly dependent, then there exists a nonzero vector x such that Bx = 0.
From this, A(Bx) = A0 and (AB)x = 0 (by associativity). Since x is nonzero, the columns of AB must
be linearly dependent.
23. If x satisfies Ax = 0, then CAx = C0 = 0 and so I_n x = 0 and x = 0. This shows that the equation Ax = 0 has no free variables. So every variable is a basic variable and every column of A is a pivot column. (A variation of this argument could be made using linear independence and Exercise 30 in Section 1.7.) Since each pivot is in a different row, A must have at least as many rows as columns.
24. Write I3 = [e1 e2 e3] and D = [d1 d2 d3]. By definition of AD, the equation AD = I3 is equivalent to the three equations Ad1 = e1, Ad2 = e2, and Ad3 = e3. Each of these equations has at least one solution because the columns of A span R^3. (See Theorem 4 in Section 1.4.) Select one solution of each equation and use them for the columns of D. Then AD = I3.
25. By Exercise 23, the equation CA = I_n implies that (number of rows in A) ≥ (number of columns), that is, m ≥ n. By Exercise 24, the equation AD = I_m implies that (number of rows in A) ≤ (number of columns), that is, m ≤ n. Thus m = n. To prove the second statement, observe that CAD = (CA)D = I_n D = D, and also CAD = C(AD) = CI_m = C. Thus C = D. A shorter calculation is
C = C I_n = C(AD) = (CA)D = I_n D = D
26. Take any b in R^m. By hypothesis, ADb = I_m b = b. Rewrite this equation as A(Db) = b. Thus, the vector x = Db satisfies Ax = b. This proves that the equation Ax = b has a solution for each b in R^m. By Theorem 4 in Section 1.4, A has a pivot position in each row. Since each pivot is in a different column, A must have at least as many columns as rows.
27. The product u^T v is a 1×1 matrix, which usually is identified with a real number and is written without the matrix brackets.
u^T v = [-3 2 -5] [a] = -3a + 2b - 5c,    v^T u = [a b c] [-3] = -3a + 2b - 5c
                  [b]                                     [ 2]
                  [c]                                     [-5]
u v^T = [-3] [a b c] = [-3a -3b -3c]
        [ 2]           [ 2a  2b  2c]
        [-5]           [-5a -5b -5c]
v u^T = [a] [-3 2 -5] = [-3a 2a -5a]
        [b]             [-3b 2b -5b]
        [c]             [-3c 2c -5c]
28. Since the inner product u^T v is a real number, it equals its transpose. That is, u^T v = (u^T v)^T = v^T (u^T)^T = v^T u, by Theorem 3(d) regarding the transpose of a product of matrices and by Theorem 3(a). The outer product uv^T is an n×n matrix. By Theorem 3, (uv^T)^T = (v^T)^T u^T = vu^T.
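For concrete vectors, the inner and outer products and the transpose identity (uv^T)^T = vu^T look like this (the sample values are chosen only for illustration):

```python
u = [-3, 2, -5]
v = [1, 4, 2]   # arbitrary sample vector

inner = sum(x * y for x, y in zip(u, v))     # u^T v, a scalar
outer = [[x * y for y in v] for x in u]      # u v^T, a 3x3 matrix
vut   = [[y * x for x in u] for y in v]      # v u^T

assert inner == -5
assert outer == [[-3, -12, -6], [2, 8, 4], [-5, -20, -10]]
# (u v^T)^T equals v u^T:
assert vut == [[outer[j][i] for j in range(3)] for i in range(3)]
```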
29. The (i, j)-entry of A(B + C) equals the (i, j)-entry of AB + AC, because
Σ_{k=1}^{n} a_ik (b_kj + c_kj) = Σ_{k=1}^{n} a_ik b_kj + Σ_{k=1}^{n} a_ik c_kj
The (i, j)-entry of (B + C)A equals the (i, j)-entry of BA + CA, because
Σ_{k=1}^{n} (b_ik + c_ik) a_kj = Σ_{k=1}^{n} b_ik a_kj + Σ_{k=1}^{n} c_ik a_kj
30. The (i, j)-entries of r(AB), (rA)B, and A(rB) are all equal, because
r Σ_{k=1}^{n} a_ik b_kj = Σ_{k=1}^{n} (r a_ik) b_kj = Σ_{k=1}^{n} a_ik (r b_kj)
31. Use the definition of the product I_m A and the fact that I_m x = x for x in R^m.
I_m A = I_m [a1 ⋯ an] = [I_m a1 ⋯ I_m an] = [a1 ⋯ an] = A
32. Let e_j and a_j denote the jth columns of I_n and A, respectively. By definition, the jth column of AI_n is Ae_j, which is simply a_j because e_j has 1 in the jth position and zeros elsewhere. Thus corresponding columns of AI_n and A are equal. Hence AI_n = A.
33. The (i, j)-entry of (AB)^T is the (j, i)-entry of AB, which is
a_j1 b_1i + ⋯ + a_jn b_ni
The entries in row i of B^T are b_1i, …, b_ni, because they come from column i of B. Likewise, the entries in column j of A^T are a_j1, …, a_jn, because they come from row j of A. Thus the (i, j)-entry in B^T A^T is a_j1 b_1i + ⋯ + a_jn b_ni, as above.
34. Use Theorem 3(d), treating x as an n×1 matrix: (ABx)^T = x^T (AB)^T = x^T B^T A^T.
35. [M] The answer here depends on the choice of matrix program. For MATLAB, use the help
command to read about zeros, ones, eye, and diag. For other programs see the
appendices in the Study Guide. (The TI calculators have fewer single commands that produce
special matrices.)
36. [M] The answer depends on the choice of matrix program. In MATLAB, the command rand(5,6) creates a 5×6 matrix with random entries uniformly distributed between 0 and 1. The command
round(19*(rand(4,4)-.5))
creates a random 4×4 matrix with integer entries between –9 and 9. The same result is produced by the command randomint in the Laydata4 Toolbox on the text website. For other matrix programs see the appendices in the Study Guide.
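An analogous random integer matrix can be produced in Python; this sketch mirrors the MATLAB expression above (the seed value is arbitrary, and Python's round suffices here since only the integer range matters):

```python
import random

random.seed(1)
# Analog of round(19*(rand(4,4)-.5)): a 4x4 matrix of integers in [-9, 9].
M = [[round(19 * (random.random() - 0.5)) for _ in range(4)] for _ in range(4)]

assert len(M) == 4 and all(len(row) == 4 for row in M)
assert all(-9 <= x <= 9 for row in M for x in row)
```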
37. [M] The equality AB = BA is very likely to be false for 4×4 matrices selected at random.
38. [M] (A + I)(A – I) – (A² – I) = 0 for all 5×5 matrices. However, (A + B)(A – B) – (A² – B²) is the zero matrix only in the special cases when AB = BA. In general,
(A + B)(A – B) = A(A – B) + B(A – B) = AA – AB + BA – BB.
39. [M] The equalities (A^T + B^T) = (A + B)^T and (AB)^T = B^T A^T should always be true, whereas (AB)^T = A^T B^T is very likely to be false for 4×4 matrices selected at random.
40. [M] The matrix S “shifts” the entries in a vector (a, b, c, d, e) to yield (b, c, d, e, 0). The entries in S² result from applying S to the columns of S, and similarly for S³, and so on. This explains the patterns of entries in the powers of S:
      [0 0 1 0 0]        [0 0 0 1 0]        [0 0 0 0 1]
      [0 0 0 1 0]        [0 0 0 0 1]        [0 0 0 0 0]
S² =  [0 0 0 0 1],  S³ = [0 0 0 0 0],  S⁴ = [0 0 0 0 0]
      [0 0 0 0 0]        [0 0 0 0 0]        [0 0 0 0 0]
      [0 0 0 0 0]        [0 0 0 0 0]        [0 0 0 0 0]
S⁵ is the 5×5 zero matrix. S⁶ is also the 5×5 zero matrix.
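The nilpotency of S can be confirmed with a small matrix-power loop (plain Python, for illustration):

```python
def matmul(X, Y):
    """Product of two square matrices stored as lists of rows."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

n = 5
S = [[1 if j == i + 1 else 0 for j in range(n)] for i in range(n)]  # shift matrix
Z = [[0] * n for _ in range(n)]                                     # zero matrix

powers = {1: S}
for k in range(2, 7):
    powers[k] = matmul(powers[k - 1], S)

assert powers[4] != Z                         # S^4 still has a single 1
assert powers[5] == Z and powers[6] == Z      # S^5 = S^6 = 0
```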
41. [M]
A^5 =  [.3339 .3349 .3312]          [.333341 .333344 .333315]
       [.3349 .3351 .3300],  A^10 = [.333344 .333350 .333306]
       [.3312 .3300 .3388]          [.333315 .333306 .333379]
The entries in A^20 all agree with .3333333333 to 8 or 9 decimal places. The entries in A^30 all agree with .33333333333333 to at least 14 decimal places. The matrices appear to approach the matrix
[1/3 1/3 1/3]
[1/3 1/3 1/3]
[1/3 1/3 1/3]
Further exploration of this behavior appears in Sections 4.9 and 5.2.
Note:
The MATLAB box in the Study Guide introduces basic matrix notation and operations,
including the commands that create special matrices needed in Exercises 35, 36 and elsewhere. The
Study Guide appendices treat the corresponding information for the other matrix programs.
2.2 SOLUTIONS
Notes:
The text includes the matrix inversion algorithm at the end of the section because this topic is
popular. Students like it because it is a simple mechanical procedure. However, I no longer cover it in my
classes because technology is readily available to invert a matrix whenever needed, and class time is
better spent on more useful topics such as partitioned matrices. The final subsection is independent of the
inversion algorithm and is needed for Exercises 35 and 36.
Key Exercises: 8, 11–24, 35. (Actually, Exercise 8 is only helpful for some exercises in this section.
Section 2.3 has a stronger result.) Exercises 23 and 24 are used in the proof of the Invertible Matrix
Theorem (IMT) in Section 2.3, along with Exercises 23 and 24 in Section 2.1. I recommend letting
students work on two or more of these four exercises before proceeding to Section 2.3. In this way
students participate in the proof of the IMT rather than simply watch an instructor carry out the proof.
Also, this activity will help students understand why the theorem is true.
1. [8 6; 5 4]^(-1) = (1/(32 - 30))[4 -6; -5 8] = [2 -3; -5/2 4]

2. [3 2; 8 5]^(-1) = (1/(15 - 16))[5 -2; -8 3] = [-5 2; 8 -3]

3. [7 -3; -6 3]^(-1) = (1/(21 - 18))[3 3; 6 7] = (1/3)[3 3; 6 7] or [1 1; 2 7/3]
4. [2 -4; 4 -6]^(-1) = (1/(-12 + 16))[-6 4; -4 2] = (1/4)[-6 4; -4 2] or [-3/2 1; -1 1/2]
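The computations in Exercises 1-4 all use the 2×2 inverse formula of Theorem 4: A^(-1) = (1/(ad - bc))[d -b; -c a]. A minimal sketch (the function name inv2x2 is ours, not the text's):

```python
def inv2x2(a, b, c, d):
    """Return the inverse of [[a, b], [c, d]] via Theorem 4."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is not invertible")
    return [[d / det, -b / det], [-c / det, a / det]]

# Exercise 1: [[8, 6], [5, 4]] has determinant 2, so the inverse is
# (1/2)[[4, -6], [-5, 8]] = [[2, -3], [-5/2, 4]].
print(inv2x2(8, 6, 5, 4))   # [[2.0, -3.0], [-2.5, 4.0]]
```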
5. The system is equivalent to Ax = b, where A = [8 6; 5 4] and b = [2; -1], and the solution is
x = A^(-1)b = [2 -3; -5/2 4][2; -1] = [7; -9].
Thus x_1 = 7 and x_2 = -9.
6. The system is equivalent to Ax = b, where A = [7 -3; -6 3] and b = [9; -4], and the solution is x = A^(-1)b.
To compute this by hand, the arithmetic is simplified by keeping the fraction 1/det(A) in front of the matrix for A^(-1). (The Study Guide comments on this in its discussion of Exercise 7.) From Exercise 3,
x = A^(-1)b = (1/3)[3 3; 6 7][9; -4] = (1/3)[15; 26] = [5; 26/3]. Thus x_1 = 5 and x_2 = 26/3.
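The method of Exercises 5-6, forming A^(-1) and computing x = A^(-1)b, can be sketched as follows (solve2x2 is our own helper, not a routine from the text; the inverse formula is inlined):

```python
def solve2x2(a, b, c, d, b1, b2):
    """Solve [[a, b], [c, d]] x = (b1, b2) using the 2x2 inverse formula."""
    det = a * d - b * c
    x1 = (d * b1 - b * b2) / det
    x2 = (-c * b1 + a * b2) / det
    return x1, x2

# Exercise 5: A = [[8, 6], [5, 4]], b = (2, -1) gives x = (7, -9).
print(solve2x2(8, 6, 5, 4, 2, -1))   # (7.0, -9.0)
```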
7. a. [1 2; 5 12]^(-1) = (1/(1·12 - 2·5))[12 -2; -5 1] = (1/2)[12 -2; -5 1] or [6 -1; -2.5 .5]
x = A^(-1)b_1 = (1/2)[12 -2; -5 1][-1; 3] = (1/2)[-18; 8] = [-9; 4]. Similar calculations give
A^(-1)b_2 = [11; -5], A^(-1)b_3 = [6; -2], A^(-1)b_4 = [13; -5].
b. [A b_1 b_2 b_3 b_4] = [1 2 -1 1 2 3; 5 12 3 -5 6 5]
~ [1 2 -1 1 2 3; 0 2 8 -10 -4 -10] ~ [1 2 -1 1 2 3; 0 1 4 -5 -2 -5]
~ [1 0 -9 11 6 13; 0 1 4 -5 -2 -5]
The solutions are [-9; 4], [11; -5], [6; -2], and [13; -5], the same as in part (a).
Note: The Study Guide also discusses the number of arithmetic calculations for this Exercise 7, stating that when A is large, the method used in (b) is much faster than using A^(-1).
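That claim can be illustrated: one Gauss-Jordan pass over [A b_1 b_2 b_3 b_4] produces all four solutions at once, assuming the augmented matrix from part (b) of Exercise 7 (gauss_jordan is our own helper, not a routine from the text):

```python
def gauss_jordan(M):
    """Reduce an augmented matrix (list of rows) to reduced echelon form in place."""
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        if r == rows:
            break
        # Find a row at or below r with a usable pivot in column c.
        pivot = next((i for i in range(r, rows) if abs(M[i][c]) > 1e-12), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        M[r] = [v / M[r][c] for v in M[r]]          # scale the pivot row
        for i in range(rows):
            if i != r:                               # clear column c elsewhere
                M[i] = [vi - M[i][c] * vr for vi, vr in zip(M[i], M[r])]
        r += 1
    return M

M = [[1, 2, -1, 1, 2, 3],
     [5, 12, 3, -5, 6, 5]]
gauss_jordan(M)
# Columns 3-6 of M now hold the four solutions (-9, 4), (11, -5), (6, -2), (13, -5).
```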
8. Left-multiply each side of A = PBP^(-1) by P^(-1):
P^(-1)A = P^(-1)PBP^(-1), P^(-1)A = IBP^(-1), P^(-1)A = BP^(-1)
Then right-multiply each side of the result by P:
P^(-1)AP = BP^(-1)P, P^(-1)AP = BI, P^(-1)AP = B
Parentheses are routinely suppressed because of the associative property of matrix multiplication.
9. a. True, by definition of invertible.
b. False. See Theorem 6(b).
c. False. If A = [1 1; 0 0], then ab - cd = 1 - 0 = 1 ≠ 0, but Theorem 4 shows that this matrix is not invertible, because ad - bc = 0.
d. True. This follows from Theorem 5, which also says that the solution of Ax = b is unique, for
each b.
e. True, by the box just before Example 6.
10. a. False. The last part of Theorem 7 is misstated here.
b. True, by Theorem 6(a).
c. False. The product matrix is invertible, but the product of inverses should be in the reverse order.
See Theorem 6(b).
d. True. See the subsection “Another View of Matrix Inversion”.
e. True, by Theorem 7.
11. (The proof can be modeled after the proof of Theorem 5.) The n×p matrix B is given (but is arbitrary). Since A is invertible, the matrix A^(-1)B satisfies AX = B, because A(A^(-1)B) = AA^(-1)B = IB = B. To show this solution is unique, let X be any solution of AX = B. Then, left-multiplication of each side by A^(-1) shows that X must be A^(-1)B:
A^(-1)(AX) = A^(-1)B, IX = A^(-1)B, and X = A^(-1)B.
12. Left-multiply each side of the equation AD = I by A^(-1) to obtain
A^(-1)AD = A^(-1)I, ID = A^(-1), and D = A^(-1).
Parentheses are routinely suppressed because of the associative property of matrix multiplication.
13. Left-multiply each side of the equation AB = AC by A^(-1) to obtain
A^(-1)AB = A^(-1)AC, IB = IC, and B = C.
This conclusion does not always follow when A is singular. Exercise 10 of Section 2.1 provides a
counterexample.
14. Right-multiply each side of the equation (B - C)D = 0 by D^(-1) to obtain
(B - C)DD^(-1) = 0D^(-1), (B - C)I = 0, B - C = 0, and B = C.
15. If you assign this exercise, consider giving the following Hint: Use elementary matrices and imitate the proof of Theorem 7. The solution in the Instructor's Edition follows this hint. Here is another solution, based on the idea at the end of Section 2.2.
Write B = [b_1 ⋯ b_p] and X = [u_1 ⋯ u_p]. By definition of matrix multiplication, AX = [Au_1 ⋯ Au_p]. Thus, the equation AX = B is equivalent to the p systems:
Au_1 = b_1, …, Au_p = b_p
Since A is the coefficient matrix in each system, these systems may be solved simultaneously, placing the augmented columns of these systems next to A to form [A b_1 ⋯ b_p] = [A B]. Since A is invertible, the solutions u_1, …, u_p are uniquely determined, and [A b_1 ⋯ b_p] must row reduce to [I u_1 ⋯ u_p] = [I X]. By Exercise 11, X is the unique solution A^(-1)B of AX = B.
16. Let C = AB. Then CB^(-1) = ABB^(-1), so CB^(-1) = AI = A. This shows that A is the product of invertible matrices and hence is invertible, by Theorem 6.
Note: The Study Guide warns against using the formula (AB)^(-1) = B^(-1)A^(-1) here, because this formula can be used only when both A and B are already known to be invertible.
17. The box following Theorem 6 suggests what the inverse of ABC should be, namely, C^(-1)B^(-1)A^(-1). To verify that this is correct, compute:
(ABC)C^(-1)B^(-1)A^(-1) = ABCC^(-1)B^(-1)A^(-1) = ABIB^(-1)A^(-1) = ABB^(-1)A^(-1) = AIA^(-1) = AA^(-1) = I
and
C^(-1)B^(-1)A^(-1)(ABC) = C^(-1)B^(-1)A^(-1)ABC = C^(-1)B^(-1)IBC = C^(-1)B^(-1)BC = C^(-1)IC = C^(-1)C = I
18. Right-multiply each side of AB = BC by B^(-1):
ABB^(-1) = BCB^(-1), AI = BCB^(-1), A = BCB^(-1).
19. Unlike Exercise 18, this exercise asks two things, "Does a solution exist and what is it?" First, find what the solution must be, if it exists. That is, suppose X satisfies the equation C^(-1)(A + X)B^(-1) = I. Left-multiply each side by C, and then right-multiply each side by B:
CC^(-1)(A + X)B^(-1) = CI, I(A + X)B^(-1) = C, (A + X)B^(-1)B = CB, (A + X)I = CB
Expand the left side and then subtract A from both sides:
AI + XI = CB, A + X = CB, X = CB - A
If a solution exists, it must be CB - A. To show that CB - A really is a solution, substitute it for X:
C^(-1)[A + (CB - A)]B^(-1) = C^(-1)[CB]B^(-1) = C^(-1)CBB^(-1) = II = I.
Note: The Study Guide suggests that students ask their instructor about how many details to include in their proofs. After some practice with algebra, an expression such as CC^(-1)(A + X)B^(-1) could be simplified directly to (A + X)B^(-1) without first replacing CC^(-1) by I. However, you may wish this detail to be included in the homework for this section.
20. a. Left-multiply both sides of (A - AX)^(-1) = X^(-1)B by X to see that B is invertible because it is the product of invertible matrices.
b. Invert both sides of the original equation and use Theorem 6 about the inverse of a product (which applies because X^(-1) and B are invertible):
A - AX = (X^(-1)B)^(-1) = B^(-1)(X^(-1))^(-1) = B^(-1)X
Then A = AX + B^(-1)X = (A + B^(-1))X. The product (A + B^(-1))X is invertible because A is invertible. Since X is known to be invertible, so is the other factor, A + B^(-1), by Exercise 16 or by an argument similar to part (a). Finally,
(A + B^(-1))^(-1)A = (A + B^(-1))^(-1)(A + B^(-1))X = X
Note:
This exercise is difficult. The algebra is not trivial, and at this point in the course, most students
will not recognize the need to verify that a matrix is invertible.
21. Suppose A is invertible. By Theorem 5, the equation Ax = 0 has only one solution, namely, the zero
solution. This means that the columns of A are linearly independent, by a remark in Section 1.7.
22. Suppose A is invertible. By Theorem 5, the equation Ax = b has a solution (in fact, a unique solution) for each b. By Theorem 4 in Section 1.4, the columns of A span R^n.
23. Suppose A is n×n and the equation Ax = 0 has only the trivial solution. Then there are no free
variables in this equation, and so A has n pivot columns. Since A is square and the n pivot positions
must be in different rows, the pivots in an echelon form of A must be on the main diagonal. Hence A
is row equivalent to the n×n identity matrix.
24. If the equation Ax = b has a solution for each b in R^n, then A has a pivot position in each row, by Theorem 4 in Section 1.4. Since A is square, the pivots must be on the diagonal of A. It follows that A is row equivalent to I_n. By Theorem 7, A is invertible.
25. Suppose A = [a b; c d] and ad - bc = 0. If a = b = 0, then examine [0 0; c d][x_1; x_2] = [0; 0]. This has the solution x_1 = [d; -c]. This solution is nonzero, except when a = b = c = d = 0. In that case, however, A is the zero matrix, and Ax = 0 for every vector x. Finally, if a and b are not both zero, set x_2 = [-b; a]. Then
Ax_2 = [a b; c d][-b; a] = [-ab + ba; -cb + da] = [0; 0], because -cb + da = 0. Thus, x_2 is a nontrivial solution of Ax = 0. So, in all cases, the equation Ax = 0 has more than one solution. This is impossible when A is invertible (by Theorem 5), so A is not invertible.
26. [d -b; -c a][a b; c d] = [da - bc db - bd; -ca + ac -cb + ad] = [ad - bc 0; 0 ad - bc]. Divide both sides by ad - bc to get CA = I.
[a b; c d][d -b; -c a] = [ad - bc -ab + ab; cd - dc -cb + da] = [ad - bc 0; 0 ad - bc].
Divide both sides by ad - bc. The right side is I. The left side is AC, because
(1/(ad - bc))[a b; c d][d -b; -c a] = [a b; c d]·(1/(ad - bc))[d -b; -c a] = AC
27. a. Interchange A and B in equation (1) after Example 6 in Section 2.1: row_i(BA) = row_i(B)·A. Then replace B by the identity matrix: row_i(A) = row_i(IA) = row_i(I)·A.
b. Using part (a), when rows 1 and 2 of A are interchanged, write the result as
[row_2(A); row_1(A); row_3(A)] = [row_2(I)·A; row_1(I)·A; row_3(I)·A] = [row_2(I); row_1(I); row_3(I)]·A = EA (*)
Here, E is obtained by interchanging rows 1 and 2 of I. The second equality in (*) is a consequence of the fact that row_i(EA) = row_i(E)·A.
c. Using part (a), when row 3 of A is multiplied by 5, write the result as
[row_1(A); row_2(A); 5·row_3(A)] = [row_1(I)·A; row_2(I)·A; 5·row_3(I)·A] = [row_1(I); row_2(I); 5·row_3(I)]·A = EA
Here, E is obtained by multiplying row 3 of I by 5.
28. When row 2 of A is replaced by row_2(A) - 3·row_1(A), write the result as
[row_1(A); row_2(A) - 3·row_1(A); row_3(A)] = [row_1(I)·A; row_2(I)·A - 3·row_1(I)·A; row_3(I)·A]
= [row_1(I)·A; [row_2(I) - 3·row_1(I)]·A; row_3(I)·A] = [row_1(I); row_2(I) - 3·row_1(I); row_3(I)]·A = EA
Here, E is obtained by replacing row_2(I) by row_2(I) - 3·row_1(I).
29.
1310131010 3110 31
[] ~ ~ ~
490103410341014/31/3
AI
−− − −
ªºªºªºªº
=«»«»«»«»
−−
¬¼¬¼¬¼¬¼
A
–1
=
31
4/3 1/3
ªº
«»
¬¼
30.
36 1 0 121/30 1 2 1/3 0
[] ~ ~
47 0 1 47 0 1 0 1 4/3 1
AI ªºª ºª º
=«»« »« »
−−
¬¼¬ ¼¬ ¼
1
12 1/3 0 1 0 7/3 2 7/3 2
~~ .
014/3 1 0 1 4/3 1 4/3 1
A
−−
ªºª ºªº
=
«»« »«»
−−
¬¼¬ ¼¬¼
31.
102100 102 100
[]314010~012310
234001038201
AI
−−
ªºª º
«»« »
=−−
«»« »
«»« »
−−
¬¼¬ ¼
10 2 100 100 831
~0 1 2 3 1 0~0 1 0 10 4 1
00 2731 002731
ªºªº
«»«»
«»«»
«»«»
¬¼¬¼
1
100 8 3 1 8 3 1
~0 1 0 10 4 1 . 10 4 1
0017/23/21/2 7/23/21/2
A
ªºª º
«»« »
=
«»« »
«»« »
¬¼¬ ¼
32.
121100 121100
[]473010~011410
264001 022201
AI
−−
ªºªº
«»«»
=−− −
«»«»
«»«»
−− −
¬¼¬¼
12 1 1 0 0
~0 1 1 4 1 0
00 010 2 1
ªº
«»
«»
«»
¬¼
. The matrix A is not invertible.
33. Let B = [1 0 0 ⋯ 0; -1 1 0 ⋯ 0; 0 -1 1 ⋯ 0; ⋮ ⋱ ⋱ ⋮; 0 ⋯ 0 -1 1], and for j = 1, …, n, let a_j, b_j, and e_j denote the jth columns of A, B, and I, respectively. Note that for j = 1, …, n - 1, a_j - a_{j+1} = e_j (because a_j and a_{j+1} have the same entries except for the jth row), b_j = e_j - e_{j+1}, and a_n = b_n = e_n.
To show that AB = I, it suffices to show that Ab_j = e_j for each j. For j = 1, …, n - 1,
Ab_j = A(e_j - e_{j+1}) = Ae_j - Ae_{j+1} = a_j - a_{j+1} = e_j
and Ab_n = Ae_n = a_n = e_n. Next, observe that a_j = e_j + ⋯ + e_n for each j. Thus,
Ba_j = B(e_j + ⋯ + e_n) = b_j + ⋯ + b_n = (e_j - e_{j+1}) + (e_{j+1} - e_{j+2}) + ⋯ + (e_{n-1} - e_n) + e_n = e_j
This proves that BA = I. Combined with the first part, this proves that B = A^(-1).
Note:
Students who do this problem and then do the corresponding exercise in Section 2.4 will appreciate
the Invertible Matrix Theorem, partitioned matrix notation, and the power of a proof by induction.
34. Let
A =
100 0 1 0 0 0
220 0 1 1/2 0
, and
33 3 0 0 1/21/3
00 1/(1)1/
B
nn n n nn
ªºª º
«»« »
«»« »
«»« »
=
«»« »
«»« »
«»« »
−−
¬¼¬ ¼
""
#%# #%%#
"
and for j = 1, …, n, let a
j
, b
j
, and e
j
denote the jth columns of A, B, and I, respectively. Note that for
j = 1, …, n–1, a
j
= je
j
+(j+1)e
j+1
+n e
n
, a
n
= n e
n
, b
j
=
1
1()
jj
j
+
ee , and
1.
nn
n
=be
To show that AB = I, it suffices to show that Ab
j
= e
j
for each j. For j = 1, …, n–1,
Ab
j
= A
1
1()
jj
j
+
§·
¨¸
©¹
ee =
1
1()
jj
j
+
aa
=
()
jj1 j 1 jj
11
(1e ) ((1)e ) .
nn
jj n j n j
jj
++
ªº
++ ++ + ++ = =
¬¼
ee eee!!
Also, Ab
n
= 11
nnn
Ann
§·
==
¨¸
©¹
eae.
Moreover,
()
jj1 1
1B (1)
jnj j n
BjBj nBj j n
++
=++ ++ =++ ++ae e eb b b!!
=
jj+1 j+1j+2 1 j
()( )( ) .
nnn
+++ +=ee e e e e e e!
which proves that BA = I. Combined with the first part, this proves that B = A
–1
.
Note: If you assign Exercise 34, you may wish to supply a hint using the notation from Exercise 33: Express each column of A in terms of the columns e_1, …, e_n of the identity matrix. Do the same for B.
35. Row reduce [A e_3]:
[1 7 3 0; -2 -15 -6 0; 1 3 2 1] ~ [1 7 3 0; 0 -1 0 0; 0 -4 -1 1] ~ [1 7 3 0; 0 1 0 0; 0 0 1 -1]
~ [1 0 3 0; 0 1 0 0; 0 0 1 -1] ~ [1 0 0 3; 0 1 0 0; 0 0 1 -1]
Answer: The third column of A^(-1) is [3; 0; -1].
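Exercise 35's point, that one column of A^(-1) costs only one system solve, can be sketched with Cramer's rule, assuming A = [[1, 7, 3], [-2, -15, -6], [1, 3, 2]] as in this exercise (det3 and solve3 are our own small helpers, not routines from the text):

```python
def det3(M):
    """3x3 determinant by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def solve3(M, y):
    """Solve a 3x3 system M x = y by Cramer's rule."""
    d = det3(M)
    x = []
    for j in range(3):
        Mj = [row[:] for row in M]      # replace column j with y
        for i in range(3):
            Mj[i][j] = y[i]
        x.append(det3(Mj) / d)
    return x

A = [[1, 7, 3], [-2, -15, -6], [1, 3, 2]]
# Solving A x = e3 yields only the third column of A^(-1).
print(solve3(A, [0, 0, 1]))   # [3.0, 0.0, -1.0]
```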
36. [M] Write B = [A F], where F consists of the last two columns of I_3, and row reduce:
B = [-25 -9 -27 0 0; 536 185 537 1 0; 154 52 143 0 1] ~ [1 0 0 .1126 -.1559; 0 1 0 -.5611 1.0077; 0 0 1 .0828 -.1915]
The last two columns of A^(-1) are [.1126 -.1559; -.5611 1.0077; .0828 -.1915].
37. There are many possibilities for C, but C = [1 1 1; 1 1 0] is the only one whose entries are 1, -1, and 0. With only three possibilities for each entry, the construction of C can be done by trial and error. This is probably faster than setting up a system of 4 equations in 6 unknowns. The fact that A cannot be invertible follows from Exercise 25 in Section 2.1, because A is not square.
38. Write AD = A[d_1 d_2] = [Ad_1 Ad_2]. The structure of A shows that D = [1 1; 0 1; 0 0; 0 0] and D = [1 0; 1 1; 1 1; 0 1] are two possibilities. There are 9 possible answers. However, there is no 4×2 matrix C such that CA = I_4. If this were true, then CAx would equal x for all x in R^4. This cannot happen because the columns of A are linearly dependent and so Ax = 0 for some nonzero vector x. For such an x, CAx = C(0) = 0. An alternate justification would be to cite Exercise 23 or 25 in Section 2.1.
39. y = Df = [.011 .003 .001; .003 .009 .003; .001 .003 .011][40; 50; 30] = [.62; .66; .52]. The deflections are .62 in., .66 in., and .52 in. at points 1, 2, and 3, respectively.
40. [M] The stiffness matrix is D^(-1). Use an "inverse" command to produce
D^(-1) = (100/3)[3 -1 0; -1 4 -1; 0 -1 3]
To find the forces (in pounds) required to produce a deflection of .04 cm at point 3, most students will use technology to solve Df = (0, 0, .04) and obtain (0, -4/3, 4).
Here is another method, based on the idea suggested in Exercise 42. The first column of D^(-1) lists the forces required to produce a deflection of 1 in. at point 1 (with zero deflection at the other points). Since the transformation y ↦ D^(-1)y is linear, the forces required to produce a deflection of .04 cm at point 3 is given by .04 times the third column of D^(-1), namely (.04)(100/3) times (0, -1, 3), or (0, -4/3, 4) pounds.
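The linearity argument in Exercise 40 can be checked numerically against the flexibility matrix D of Exercise 39 (solve3 is our own Cramer's-rule helper, not a routine from the text):

```python
D = [[0.011, 0.003, 0.001],
     [0.003, 0.009, 0.003],
     [0.001, 0.003, 0.011]]

def solve3(M, y):
    """Solve a 3x3 system M x = y by Cramer's rule."""
    def det3(N):
        (a, b, c), (d, e, f), (g, h, i) = N
        return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    d = det3(M)
    x = []
    for j in range(3):
        Mj = [row[:] for row in M]      # replace column j with y
        for i in range(3):
            Mj[i][j] = y[i]
        x.append(det3(Mj) / d)
    return x

f = solve3(D, [0, 0, 0.04])
# f is approximately (0, -4/3, 4): .04 times the third column of D^(-1).
```

The same helper applied to y = (0, 0, 1) returns the third column of D^(-1) itself, which is the linearity point the solution makes.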
41. To determine the forces that produce deflections of .07, .12, .16, and .12 cm at the four points on the
beam, use technology to solve Df = y, where y = (.07, .12, .16, .12). The forces at the four points are
.95, 6.19, 11.43, and 3.81 newtons, respectively.
42. [M] To determine the forces that produce a deflection of .22 cm at the second point on the beam, use technology to solve Df = y, where y = (0, .22, 0, 0). The forces at the four points are -10.476, 31.429, -10.476, and 0 newtons, respectively (to three significant digits). These forces are .22 times the entries in the second column of D^(-1). Reason: The transformation y ↦ D^(-1)y is linear, so the forces required to produce a deflection of .22 cm at the second point are .22 times the forces required to produce a deflection of 1 cm at the second point. These forces are listed in the second column of D^(-1).
Another possible discussion: The solution of Dx = (0, 1, 0, 0) is the second column of D^(-1). Multiply both sides of this equation by .22 to obtain D(.22x) = (0, .22, 0, 0). So .22x is the solution of Df = (0, .22, 0, 0). (The argument uses linearity, but students may not mention this.)
Note:
The Study Guide suggests using gauss, swap, bgauss, and scale to reduce [A I]
because I prefer to postpone the use of ref (or rref) until later. If you wish to introduce ref now,
see the Study Guide’s technology notes for Sections 2.8 or 4.3. (Recall that Sections 2.8 and 2.9 are only
covered when an instructor plans to skip Chapter 4 and get quickly to eigenvalues.)
2.3 SOLUTIONS
Notes:
This section ties together most of the concepts studied thus far. With strong encouragement from
an instructor, most students can use this opportunity to review and reflect upon what they have learned,
and form a solid foundation for future work. Students who fail to do this now usually struggle throughout
the rest of the course. Section 2.3 can be used in at least three different ways.
(1) Stop after Example 1 and assign exercises only from among the Practice Problems and Exercises 1 to 28. I do this when teaching "Course 3" described in the text's "Notes to the Instructor." If you did not cover Theorem 12 in Section 1.9, omit statements (f) and (i) from the Invertible Matrix Theorem.
(2) Include the subsection "Invertible Linear Transformations" in Section 2.3, if you covered Section 1.9. I do this when teaching "Course 1" because our mathematics and computer science majors take this class. Exercises 29–40 support this material.
(3) Skip the linear transformation material here, but discuss the condition number and the Numerical Notes. Assign exercises from among 1–28 and 41–45, and perhaps add a computer project on the condition number. (See the projects on our web site.) I do this when teaching "Course 2" for our engineers.
The abbreviation IMT (here and in the Study Guide) denotes the Invertible Matrix Theorem (Theorem
8).
1. The columns of the matrix [5 7; -3 -6] are not multiples, so they are linearly independent. By (e) in the IMT, the matrix is invertible. Also, the matrix is invertible by Theorem 4 in Section 2.2 because the determinant is nonzero.
2. The fact that the columns of [4 2; 6 3] are multiples of each other is one way to show that this matrix is not invertible. Another is to check the determinant. In this case it is easily seen to be zero. By Theorem 4 in Section 2.2, the matrix is not invertible.
3. Row reduction to echelon form is trivial because there is really no need for arithmetic calculations:
[3 0 0; -3 -4 0; 8 5 -3] ~ [3 0 0; 0 -4 0; 0 5 -3] ~ [3 0 0; 0 -4 0; 0 0 -3]
The 3×3 matrix has 3 pivot positions and hence is invertible, by (c) of the IMT. [Another explanation could be given using the transposed matrix. But see the note below that follows the solution of Exercise 14.]
4. The matrix [5 1 4; 0 0 0; 1 4 9] cannot row reduce to the identity matrix since it already contains a row of zeros. Hence the matrix is not invertible (or singular) by (b) in the IMT.
5. The matrix [3 0 3; 2 0 4; 4 0 7] obviously has linearly dependent columns (because one column is zero), and so the matrix is not invertible (or singular) by (e) in the IMT.
6.
136 13 6 136 13 6
043~04 3~043~04 3
360 0318 016 0021
−− − − −− −
ªºªºªºªº
«»«»«»«»
«»«»«»«»
«»«»«»«»
−−− −
¬¼¬¼¬¼¬¼
The matrix is invertible because it is row equivalent to the identity matrix.
7.
1301 1301 1301
3583 0480 0480
~~
2632 0030 0030
0121 0121 0001
−− −− −−
ªºªºªº
«»«»«»
−− −
«»«»«»
«»«»«»
−−
«»«»«»
−−
«»«»«»
¬¼¬¼¬¼
The 4×4 matrix has four pivot positions and so is invertible by (c) of the IMT.
8. The 4×4 matrix [3 4 7 4; 0 1 4 6; 0 0 2 8; 0 0 0 1] is invertible because it has four pivot positions, by (c) of the IMT.
9. [M] Using technology, the given 4×4 matrix is row equivalent to the 4×4 identity matrix. The matrix is invertible because it has four pivot positions, by (c) of the IMT.
10. [M]
531 7 9 5 3 1 7 9
642 8 8 0 .4 .8 .4 18.8
~
75310 9 0 .81.6 .2 3.6
964 9 5 0 .62.2 21.6 21.2
85211 4 0 .2 .4 .2 10.4
ªºª º
«»« »
−−
«»« »
«»« »
«»« »
−− − −
«»« »
«»« »
−−
¬¼¬ ¼
531 7 9 531 7 9
0 .4 .8 .4 18.8 0 .4 .8 .4 18.8
~~
000 1 34 001 21 7
001 21 7 000 1 34
000 0 1 000 0 1
ªºªº
«»«»
−− −−
«»«»
«»«»
«»«»
«»«»
«»«»
−−
¬¼¬¼
The 5×5 matrix is invertible because it has five pivot positions, by (c) of the IMT.
11. a. True, by the IMT. If statement (d) of the IMT is true, then so is statement (b).
b. True. If statement (h) of the IMT is true, then so is statement (e).
c. False. Statement (g) of the IMT is true only for invertible matrices.
d. True, by the IMT. If the equation Ax = 0 has a nontrivial solution, then statement (d) of the IMT is false. In this case, all the lettered statements in the IMT are false, including statement (c), which means that A must have fewer than n pivot positions.
e. True, by the IMT. If A^T is not invertible, then statement (l) of the IMT is false, and hence statement (a) must also be false.
12. a. True. If statement (k) of the IMT is true, then so is statement (j). Use the first box after the IMT.
b. False. Notice that statement (i) of the IMT uses the word onto rather than the word into.
c. True. If statement (e) of the IMT is true, then so is statement (h).
d. False. Since (g) of the IMT is true, so is (f).
e. False, by the IMT. The fact that there is a b in R^n such that the equation Ax = b is consistent does not imply that statement (g) of the IMT is true, and hence there could be more than one solution.
Note:
The solutions below for Exercises 13–30 refer mostly to the IMT. In many cases, however, part or
all of an acceptable solution could also be based on various results that were used to establish the IMT.
13. If a square upper triangular n×n matrix has nonzero diagonal entries, then because it is already in echelon form, the matrix is row equivalent to I_n and hence is invertible, by the IMT. Conversely, if the matrix is invertible, it has n pivots on the diagonal and hence the diagonal entries are nonzero.
14. If A is lower triangular with nonzero entries on the diagonal, then these n diagonal entries can be
used as pivots to produce zeros below the diagonal. Thus A has n pivots and so is invertible, by the
IMT. If one of the diagonal entries in A is zero, A will have fewer than n pivots and hence be
singular.
Notes: For Exercise 14, another correct analysis of the case when A has nonzero diagonal entries is to apply the IMT (or Exercise 13) to A^T. Then use Theorem 6 in Section 2.2 to conclude that since A^T is invertible so is its transpose, A. You might mention this idea in class, but I recommend that you not spend much time discussing A^T and problems related to it, in order to keep from making this section too lengthy. (The transpose is treated infrequently in the text until Chapter 6.)
If you do plan to ask a test question that involves A^T and the IMT, then you should give the students some extra homework that develops skill using A^T. For instance, in Exercise 14 replace "columns" by "rows." Also, you could ask students to explain why an n×n matrix with linearly independent columns must also have linearly independent rows.
15. Part (h) of the IMT shows that a 4×4 matrix cannot be invertible when its columns do not span R^4.
16. If A is invertible, so is A^T, by (l) of the IMT. By (e) of the IMT applied to A^T, the columns of A^T are linearly independent.
17. If A has two identical columns then its columns are linearly dependent. Part (e) of the IMT shows that
A cannot be invertible.
18. If A contains two identical rows, then it cannot be row reduced to the identity because subtracting
one row from the other creates a row of zeros. By (b) of the IMT, such a matrix cannot be invertible.
19. By (e) of the IMT, D is invertible. Thus the equation Dx = b has a solution for each b in R^7, by (g) of the IMT. Even better, the equation Dx = b has a unique solution for each b in R^7, by Theorem 5 in Section 2.2. (See the paragraph following the proof of the IMT.)
20. By (g) of the IMT, A is invertible. Hence, each equation Ax = b has a unique solution, by Theorem 5
in Section 2.2. This fact was pointed out in the paragraph following the proof of the IMT.
21. The matrix C cannot be invertible, by Theorem 5 in Section 2.2 or by the box following the IMT. So (h) of the IMT is false and the columns of C do not span R^n.
22. By the box following the IMT, E and F are invertible and are inverses. So FE = I = EF, and so E and
F commute.
23. Statement (g) of the IMT is false for F, so statement (d) is false, too. That is, the equation Fx = 0 has
a nontrivial solution.
24. Statement (b) of the IMT is false for G, so statements (e) and (h) are also false. That is, the columns of G are linearly dependent and the columns do not span R^n.
25. Suppose that A is square and AB = I. Then A is invertible, by (k) of the IMT. Left-multiplying each side of the equation AB = I by A^(-1), one has
A^(-1)AB = A^(-1)I, IB = A^(-1), and B = A^(-1).
By Theorem 6 in Section 2.2, the matrix B (which is A^(-1)) is invertible, and its inverse is (A^(-1))^(-1), which is A.
26. If the columns of A are linearly independent, then since A is square, A is invertible, by the IMT. So A^2, which is the product of invertible matrices, is invertible. By the IMT, the columns of A^2 span R^n.
27. Let W be the inverse of AB. Then ABW = I and A(BW) = I. Since A is square, A is invertible, by (k) of the IMT.
Note:
The Study Guide for Exercise 27 emphasizes here that the equation A(BW) = I, by itself, does not
show that A is invertible. Students are referred to Exercise 38 in Section 2.2 for a counterexample.
Although there is an overall assumption that matrices in this section are square, I insist that my students
mention this fact when using the IMT. Even so, at the end of the course, I still sometimes find a student
who thinks that an equation AB = I implies that A is invertible.
28. Let W be the inverse of AB. Then WAB = I and (WA)B = I. By (j) of the IMT applied to B in place of
A, the matrix B is invertible.
29. Since the transformation x ↦ Ax is one-to-one, statement (f) of the IMT is true. Then (i) is also true and the transformation x ↦ Ax does map R^n onto R^n. Also, A is invertible, which implies that the transformation x ↦ Ax is invertible, by Theorem 9.
30. Since the transformation x ↦ Ax is not one-to-one, statement (f) of the IMT is false. Then (i) is also false and the transformation x ↦ Ax does not map R^n onto R^n. Also, A is not invertible, which implies that the transformation x ↦ Ax is not invertible, by Theorem 9.
31. Since the equation Ax = b has a solution for each b, the matrix A has a pivot in each row (Theorem 4 in Section 1.4). Since A is square, A has a pivot in each column, and so there are no free variables in the equation Ax = b, which shows that the solution is unique.
Note: The preceding argument shows that the (square) shape of A plays a crucial role. A less revealing proof is to use the "pivot in each row" and the IMT to conclude that A is invertible. Then Theorem 5 in Section 2.2 shows that the solution of Ax = b is unique.
32. If Ax = 0 has only the trivial solution, then A must have a pivot in each of its n columns. Since A is square (and this is the key point), there must be a pivot in each row of A. By Theorem 4 in Section 1.4, the equation Ax = b has a solution for each b in R^n.
Another argument: Statement (d) of the IMT is true, so A is invertible. By Theorem 5 in Section 2.2, the equation Ax = b has a (unique) solution for each b in R^n.
33. (Solution in Study Guide) The standard matrix of T is A = [-5 9; 4 -7], which is invertible because det A ≠ 0. By Theorem 9, the transformation T is invertible and the standard matrix of T^(-1) is A^(-1). From the formula for a 2×2 inverse, A^(-1) = [7 9; 4 5]. So
T^(-1)(x_1, x_2) = [7 9; 4 5][x_1; x_2] = (7x_1 + 9x_2, 4x_1 + 5x_2)
34. The standard matrix of T is A = [2 8; 2 7], which is invertible because det A = -2 ≠ 0. By Theorem 9, T is invertible, and T^(-1)(x) = Bx, where B = A^(-1) = (1/(-2))[7 -8; -2 2]. Thus
T^(-1)(x_1, x_2) = (1/(-2))[7 -8; -2 2][x_1; x_2] = (-(7/2)x_1 + 4x_2, x_1 - x_2)
35. (Solution in Study Guide) To show that T is one-to-one, suppose that T(u) = T(v) for some vectors u and v in R^n. Then S(T(u)) = S(T(v)), where S is the inverse of T. By Equation (1), u = S(T(u)) and S(T(v)) = v, so u = v. Thus T is one-to-one. To show that T is onto, suppose y represents an arbitrary vector in R^n and define x = S(y). Then, using Equation (2), T(x) = T(S(y)) = y, which shows that T maps R^n onto R^n.
Second proof: By Theorem 9, the standard matrix A of T is invertible. By the IMT, the columns of A are linearly independent and span R^n. By Theorem 12 in Section 1.9, T is one-to-one and maps R^n onto R^n.
36. Let A be the standard matrix of T. By hypothesis, T is not a one-to-one mapping. So, by Theorem 12 in Section 1.9, the standard matrix A of T has linearly dependent columns. Since A is square, the columns of A do not span R^n. By Theorem 12, again, T cannot map R^n onto R^n.
37. Let A and B be the standard matrices of T and U, respectively. Then AB is the standard matrix of the mapping x ↦ T(U(x)), because of the way matrix multiplication is defined (in Section 2.1). By hypothesis, this mapping is the identity mapping, so AB = I. Since A and B are square, they are invertible, by the IMT, and B = A^(-1). Thus, BA = I. This means that the mapping x ↦ U(T(x)) is the identity mapping, i.e., U(T(x)) = x for all x in R^n.
38. Given any v in R^n, we may write v = T(x) for some x, because T is an onto mapping. Then, the assumed properties of S and U show that S(v) = S(T(x)) = x and U(v) = U(T(x)) = x. So S(v) and U(v) are equal for each v. That is, S and U are the same function from R^n into R^n.
39. If T maps R^n onto R^n, then the columns of its standard matrix A span R^n, by Theorem 12 in Section 1.9. By the IMT, A is invertible. Hence, by Theorem 9 in Section 2.3, T is invertible, and A^(-1) is the standard matrix of T^(-1). Since A^(-1) is also invertible, by the IMT, its columns are linearly independent and span R^n. Applying Theorem 12 in Section 1.9 to the transformation T^(-1), we conclude that T^(-1) is a one-to-one mapping of R^n onto R^n.
40. Given u, v in R^n, let x = S(u) and y = S(v). Then T(x) = T(S(u)) = u and T(y) = T(S(v)) = v, by equation (2). Hence
S(u + v) = S(T(x) + T(y))
         = S(T(x + y))    because T is linear
         = x + y          by equation (1)
         = S(u) + S(v)
So, S preserves sums. For any scalar r,
Copyright © 2012 Pearson Education, Inc. Publishing as Addison-Wesley.
S(ru) = S(rT(x)) = S(T(rx))    because T is linear
      = rx                      by equation (1)
      = rS(u)
So S preserves scalar multiples. Thus S is a linear transformation.
41. [M] a. The exact solution of (3) is x1 = 3.94 and x2 = .49. The exact solution of (4) is x1 = 2.90 and x2 = 2.00.
b. When the solution of (4) is used as an approximation for the solution of (3), the error in using the value of 2.90 for x1 is about 26%, and the error in using 2.0 for x2 is about 308%.
c. The condition number of the coefficient matrix is 3363. The percentage change in the solution from (3) to (4) is about 7700 times the percentage change in the right side of the equation. This is the same order of magnitude as the condition number. The condition number gives a rough measure of how sensitive the solution of Ax = b can be to changes in b. Further information about the condition number is given at the end of Chapter 6 and in Chapter 7.
Note:
See the Study Guide’s MATLAB box, or a technology appendix, for information on condition
number. Only the TI-83+ and TI-89 lack a command for this.
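For readers without MATLAB, the sensitivity experiment behind Exercise 41 can be replayed with numpy. The coefficient matrix below is an assumption, reconstructed so that it reproduces the solutions and condition number quoted above; a minimal sketch:

```python
import numpy as np

# Assumed coefficient matrix for systems (3) and (4): it yields the
# solutions (3.94, .49) and (2.90, 2.00) and cond(A) near 3363 quoted above.
A = np.array([[4.5, 3.1],
              [1.6, 1.1]])
b3 = np.array([19.249, 6.843])   # right side of (3)
b4 = np.array([19.25, 6.84])     # right side of (4): a tiny perturbation of b3

x3 = np.linalg.solve(A, b3)
x4 = np.linalg.solve(A, b4)

print(np.linalg.cond(A))   # about 3363
print(x3)                  # close to (3.94, 0.49)
print(x4)                  # close to (2.90, 2.00)
```

A change in the third decimal place of b moves the solution by whole units, which is exactly the amplification the condition number predicts.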
42. [M] MATLAB gives cond(A) ≈ 10, which is approximately 10^1. If you make several trials with MATLAB, which records 16 digits accurately, you should find that x and x1 agree to at least 14 or 15 significant digits. So about 1 significant digit is lost. Here is the result of one experiment. The vectors were all computed to the maximum 16 decimal places but are here displayed with only four decimal places:
x = rand(4,1) = (.9501, .2311, .6068, .4860), b = Ax = (1.4219, 6.2149, 20.7973, 1.4535)
The MATLAB solution is x1 = A\b = (.9501, .2311, .6068, .4860).
However, x - x1 = (.2220, .2220, 0, .1665)×10^(-15). The computed solution x1 is accurate to about 14 decimal places.
43. [M] MATLAB gives cond(A) = 69,000. Since this has magnitude between 10^4 and 10^5, the estimated accuracy of a solution of Ax = b should be to about four or five decimal places less than the 16 decimal places that MATLAB usually computes accurately. That is, one should expect the solution to be accurate to only about 11 or 12 decimal places. Here is the result of one experiment. The vectors were all computed to the maximum 16 decimal places but are here displayed with only four decimal places:
x = rand(5,1) = (.8214, .4447, .6154, .7919, .9218), b = Ax = (19.8965, 6.8991, 26.0354, 0.7861, 22.4242)
The MATLAB solution is x1 = A\b = (.8214, .4447, .6154, .7919, .9218).
However, x - x1 = (.1679, .3578, .1775, .0084, .0002)×10^(-11). The computed solution x1 is accurate to about 11 decimal places.
44. [M] Solve Ax = (0, 0, 0, 0, 1). MATLAB shows that cond(A) ≈ 4.8×10^5. Since MATLAB computes numbers accurately to 16 decimal places, the entries in the computed value of x should be accurate to at least 11 digits. The exact solution is (630, -12600, 56700, -88200, 44100).
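The data in Exercise 44 are consistent with A being the 5×5 Hilbert matrix (an assumption: its condition number is about 4.8×10^5, and the stated exact solution is the last column of its inverse). A numpy sketch of the computation:

```python
import numpy as np

# Assumed A: the 5x5 Hilbert matrix, whose entries are 1/(i+j-1).
n = 5
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1)
b = np.zeros(n)
b[-1] = 1.0                      # right side (0, 0, 0, 0, 1)

x = np.linalg.solve(A, b)
print(np.linalg.cond(A))         # roughly 4.8e5
print(x)                         # close to (630, -12600, 56700, -88200, 44100)
```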
45. [M] Some versions of MATLAB issue a warning when asked to invert a Hilbert matrix of order 12 or larger using floating-point arithmetic. The product AA^(-1) should have several off-diagonal entries that are far from being zero. If not, try a larger matrix.
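The same experiment can be run in numpy (an analogue of the MATLAB commands; the exact size of the observed error depends on the platform's floating-point libraries):

```python
import numpy as np

# Order-12 Hilbert matrix, inverted in double precision.
n = 12
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1)
E = A @ np.linalg.inv(A) - np.eye(n)

print(np.linalg.cond(A))       # enormous, on the order of 1e16
print(np.max(np.abs(E)))       # entries of A @ inv(A) visibly far from I
```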
Note:
All matrix programs supported by the Study Guide have data for Exercise 45, but only MATLAB
and Maple have a single command to create a Hilbert matrix.
Notes:
The Study Guide for Section 2.3 organizes the statements of the Invertible Matrix Theorem in a
table that embeds these ideas in a broader discussion of rectangular matrices. The statements are arranged
in three columns: statements that are logically equivalent for any m×n matrix and are related to existence
concepts, those that are equivalent only for any n×n matrix, and those that are equivalent for any n×p
matrix and are related to uniqueness concepts. Four statements are included that are not in the text’s
official list of statements, to give more symmetry to the three columns. You may or may not wish to
comment on them.
I believe that students cannot fully understand the concepts in the IMT if they do not know the correct
wording of each statement. (Of course, this knowledge is not sufficient for understanding.) The Study
Guide’s Section 2.3 has an example of the type of question I often put on an exam at this point in the
course. The section concludes with a discussion of reviewing and reflecting, as important steps to a
mastery of linear algebra.
2.4 SOLUTIONS
Notes:
Partitioned matrices arise in theoretical discussions in essentially every field that makes use of
matrices. The Study Guide mentions some examples (with references).
Every student should be exposed to some of the ideas in this section. If time is short, you might omit
Example 4 and Theorem 10, and replace Example 5 by a problem similar to one in Exercises 1–10. (A
sample replacement is given at the end of these solutions.) Then select homework from Exercises 1–13,
15, and 21–24.
The exercises just mentioned provide a good environment for practicing matrix manipulation. Also,
students will be reminded that an equation of the form AB = I does not by itself make A or B invertible.
(The matrices must be square and the IMT is required.)
1. Apply the row-column rule as if the matrix entries were numbers, but for each product always write the entry of the left block-matrix on the left.
[I 0; E I][A B; C D] = [IA + 0C, IB + 0D; EA + IC, EB + ID] = [A, B; EA + C, EB + D]
2. Apply the row-column rule as if the matrix entries were numbers, but for each product always write the entry of the left block-matrix on the left.
[E 0; 0 F][P Q; R S] = [EP + 0R, EQ + 0S; 0P + FR, 0Q + FS] = [EP, EQ; FR, FS]
3. Apply the row-column rule as if the matrix entries were numbers, but for each product always write the entry of the left block-matrix on the left.
[0 I; I 0][A B; C D] = [0A + IC, 0B + ID; IA + 0C, IB + 0D] = [C, D; A, B]
4. Apply the row-column rule as if the matrix entries were numbers, but for each product always write the entry of the left block-matrix on the left.
[I 0; -E I][W X; Y Z] = [IW + 0Y, IX + 0Z; -EW + IY, -EX + IZ] = [W, X; Y - EW, Z - EX]
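The block identities in Exercises 1-4 can be spot-checked numerically. A numpy sketch for Exercise 1's product, with block sizes chosen arbitrarily for illustration:

```python
import numpy as np

# Random blocks of compatible sizes (the sizes are an arbitrary choice).
rng = np.random.default_rng(0)
A, B = rng.standard_normal((2, 3)), rng.standard_normal((2, 4))
C, D = rng.standard_normal((5, 3)), rng.standard_normal((5, 4))
E = rng.standard_normal((5, 2))

left  = np.block([[np.eye(2), np.zeros((2, 5))], [E, np.eye(5)]])
right = np.block([[A, B], [C, D]])
expected = np.block([[A, B], [E @ A + C, E @ B + D]])   # the formula above

print(np.allclose(left @ right, expected))  # True
```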
5. Compute the left side of the equation:
[A B; C 0][I 0; X Y] = [AI + BX, A0 + BY; CI + 0X, C0 + 0Y] = [A + BX, BY; C, 0]
Set this equal to the right side of the equation:
[A + BX, BY; C, 0] = [0 I; Z 0], so that A + BX = 0, BY = I, C = Z, 0 = 0.
Since the (2, 1) blocks are equal, Z = C. Since the (1, 2) blocks are equal, BY = I. To proceed further, assume that B and Y are square. Then the equation BY = I implies that B is invertible, by the IMT, and Y = B^(-1). (See the boxed remark that follows the IMT.) Finally, from the equality of the (1, 1) blocks, BX = -A, B^(-1)BX = B^(-1)(-A), and X = -B^(-1)A.
The order of the factors for X is crucial.
Note:
For simplicity, statements (j) and (k) in the Invertible Matrix Theorem involve square matrices
C and D. Actually, if A is n×n and if C is any matrix such that AC is the n×n identity matrix, then C must
be n×n, too. (For AC to be defined, C must have n rows, and the equation AC = I implies that C has n
columns.) Similarly, DA = I implies that D is n×n. Rather than discuss this in class, I expect that in
Exercises 5–8, when students see an equation such as BY = I, they will decide that both B and Y should be
square in order to use the IMT.
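The formulas from Exercise 5 can be checked numerically; below is a numpy sketch with randomly chosen blocks (the sizes and the trick of adding a multiple of I to make B comfortably invertible are assumptions for the demonstration):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3)) + 3 * np.eye(3)   # comfortably invertible
C = rng.standard_normal((2, 3))

X = -np.linalg.solve(B, A)   # X = -B^{-1} A
Y = np.linalg.inv(B)         # Y = B^{-1}
Z = C                        # Z = C

left  = np.block([[A, B], [C, np.zeros((2, 3))]])
mid   = np.block([[np.eye(3), np.zeros((3, 3))], [X, Y]])
right = np.block([[np.zeros((3, 3)), np.eye(3)], [Z, np.zeros((2, 3))]])

print(np.allclose(left @ mid, right))  # True
```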
6. Compute the left side of the equation:
[X 0; Y Z][A 0; B C] = [XA + 0B, X0 + 0C; YA + ZB, Y0 + ZC] = [XA, 0; YA + ZB, ZC]
Set this equal to the right side of the equation:
[XA, 0; YA + ZB, ZC] = [I 0; 0 I], so that XA = I, YA + ZB = 0, ZC = I.
To use the equality of the (1, 1) blocks, assume that A and X are square. By the IMT, the equation XA = I implies that A is invertible and X = A^(-1). (See the boxed remark that follows the IMT.) Similarly, if C and Z are assumed to be square, then the equation ZC = I implies that C is invertible, by the IMT, and Z = C^(-1). Finally, use the (2, 1) blocks and right-multiplication by A^(-1):
YA = -ZB = -C^(-1)B, YAA^(-1) = (-C^(-1)B)A^(-1), and Y = -C^(-1)BA^(-1)
The order of the factors for Y is crucial.
7. Compute the left side of the equation:
[X 0 0; Y 0 I][A Z; 0 0; B I] = [XA + 00 + 0B, XZ + 00 + 0I; YA + 00 + IB, YZ + 00 + II] = [XA, XZ; YA + B, YZ + I]
Set this equal to the right side of the equation:
[XA, XZ; YA + B, YZ + I] = [I 0; 0 I], so that XA = I, XZ = 0, YA + B = 0, YZ + I = I.
To use the equality of the (1, 1) blocks, assume that A and X are square. By the IMT, the equation XA = I implies that A is invertible and X = A^(-1). (See the boxed remark that follows the IMT.) Also, X is invertible. Since XZ = 0, X^(-1)XZ = X^(-1)0 = 0, so Z must be 0. Finally, from the equality of the (2, 1) blocks, YA = -B. Right-multiplication by A^(-1) shows that YAA^(-1) = -BA^(-1) and Y = -BA^(-1). The order of the factors for Y is crucial.
8. Compute the left side of the equation:
[A B; 0 I][X Y Z; 0 0 I] = [AX + B0, AY + B0, AZ + BI; 0X + I0, 0Y + I0, 0Z + II] = [AX, AY, AZ + B; 0, 0, I]
Set this equal to the right side of the equation:
[AX, AY, AZ + B; 0, 0, I] = [I 0 0; 0 0 I], so that AX = I, AY = 0, AZ + B = 0.
To use the equality of the (1, 1) blocks, assume that A and X are square. By the IMT, the equation AX = I implies that A is invertible and X = A^(-1). (See the boxed remark that follows the IMT.) Since AY = 0, from the equality of the (1, 2) blocks, left-multiplication by A^(-1) gives A^(-1)AY = A^(-1)0 = 0, so Y = 0. Finally, from the (1, 3) blocks, AZ = -B. Left-multiplication by A^(-1) gives A^(-1)AZ = A^(-1)(-B), and Z = -A^(-1)B. The order of the factors for Z is crucial.
Note:
The Study Guide tells students, “Problems such as 5–10 make good exam questions. Remember to
mention the IMT when appropriate, and remember that matrix multiplication is generally not
commutative.” When a problem statement includes a condition that a matrix is square, I expect my
students to mention this fact when they apply the IMT.
9. Compute the left side of the equation:
[I 0 0; A21 I 0; A31 0 I][B11 B12; B21 B22; B31 B32] = [B11, B12; A21B11 + B21, A21B12 + B22; A31B11 + B31, A31B12 + B32]
Set this equal to the right side of the equation:
[B11, B12; A21B11 + B21, A21B12 + B22; A31B11 + B31, A31B12 + B32] = [C11 C12; 0 C22; 0 C32]
so that B11 = C11, B12 = C12, A21B11 + B21 = 0, A21B12 + B22 = C22, A31B11 + B31 = 0, A31B12 + B32 = C32.
Since the (2,1) blocks are equal, A21B11 + B21 = 0 and A21B11 = -B21. Since B11 is invertible, right multiplication by B11^(-1) gives A21 = -B21B11^(-1). Likewise, since the (3,1) blocks are equal, A31B11 + B31 = 0 and A31B11 = -B31. Since B11 is invertible, right multiplication by B11^(-1) gives A31 = -B31B11^(-1). Finally, from the (2,2) entries, A21B12 + B22 = C22. Since A21 = -B21B11^(-1), C22 = B22 - B21B11^(-1)B12.
10. Since the two matrices are inverses,
[I 0 0; A I 0; B D I][I 0 0; P I 0; Q R I] = [I 0 0; 0 I 0; 0 0 I]
Compute the left side of the equation:
[I 0 0; A I 0; B D I][I 0 0; P I 0; Q R I] = [I, 0, 0; A + P, I, 0; B + DP + Q, D + R, I]
Set this equal to the right side, so that A + P = 0, D + R = 0, B + DP + Q = 0.
Since the (2,1) blocks are equal, A + P = 0 and P = -A. Likewise, since the (3,2) blocks are equal, D + R = 0 and R = -D. Finally, from the (3,1) entries, B + DP + Q = 0 and Q = -B - DP. Since P = -A, Q = -B - D(-A) = -B + DA.
11. a. True. See the subsection Addition and Scalar Multiplication.
b. False. See the paragraph before Example 3.
12. a. False. Both AB and BA are defined.
b. False. R^T and Q^T also need to be switched.
13. You are asked to establish an "if and only if" statement. First, suppose that A is invertible, and let A^(-1) = [D E; F G]. Then
[B 0; 0 C][D E; F G] = [BD, BE; CF, CG] = [I 0; 0 I]
Since B is square, the equation BD = I implies that B is invertible, by the IMT. Similarly, CG = I implies that C is invertible. Also, the equation BE = 0 implies that E = B^(-1)0 = 0. Similarly F = 0. Thus
A^(-1) = [D E; F G] = [B^(-1) 0; 0 C^(-1)]   (*)
This proves that A is invertible only if B and C are invertible. For the "if" part of the statement, suppose that B and C are invertible. Then (*) provides a likely candidate for A^(-1) which can be used to show that A is invertible. Compute:
[B 0; 0 C][B^(-1) 0; 0 C^(-1)] = [BB^(-1), 0; 0, CC^(-1)] = [I 0; 0 I]
Since A is square, this calculation and the IMT imply that A is invertible. (Don't forget this final sentence. Without it, the argument is incomplete.) Instead of that sentence, you could add the equation:
[B^(-1) 0; 0 C^(-1)][B 0; 0 C] = [B^(-1)B, 0; 0, C^(-1)C] = [I 0; 0 I]
14. You are asked to establish an "if and only if" statement. First suppose that A is invertible. Example 5 shows that A11 and A22 are invertible. This proves that A is invertible only if A11 and A22 are invertible. For the "if" part of this statement, suppose that A11 and A22 are invertible. Then the formula in Example 5 provides a likely candidate for A^(-1) which can be used to show that A is invertible. Compute:
[A11 A12; 0 A22][A11^(-1), -A11^(-1)A12A22^(-1); 0, A22^(-1)]
 = [A11A11^(-1) + 0, A11(-A11^(-1)A12A22^(-1)) + A12A22^(-1); 0 + 0, 0 + A22A22^(-1)]
 = [I, -A12A22^(-1) + A12A22^(-1); 0, I] = [I 0; 0 I]
Since A is square, this calculation and the IMT imply that A is invertible.
15. The column-row expansions of Gk and Gk+1 are:
Gk = Xk Xk^T = col1(Xk) row1(Xk^T) + ... + colk(Xk) rowk(Xk^T)
and
Gk+1 = Xk+1 Xk+1^T
     = col1(Xk+1) row1(Xk+1^T) + ... + colk(Xk+1) rowk(Xk+1^T) + colk+1(Xk+1) rowk+1(Xk+1^T)
     = col1(Xk) row1(Xk^T) + ... + colk(Xk) rowk(Xk^T) + colk+1(Xk+1) rowk+1(Xk+1^T)
     = Gk + colk+1(Xk+1) rowk+1(Xk+1^T)
since the first k columns of Xk+1 are identical to the first k columns of Xk. Thus to update Gk to produce Gk+1, the matrix colk+1(Xk+1) rowk+1(Xk+1^T) should be added to Gk.
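This rank-one update is easy to confirm numerically; a numpy sketch with arbitrarily chosen sizes:

```python
import numpy as np

rng = np.random.default_rng(2)
Xk  = rng.standard_normal((4, 3))      # X_k: the first k = 3 columns
new = rng.standard_normal((4, 1))      # column k+1, appended to form X_{k+1}
Xk1 = np.hstack([Xk, new])

Gk  = Xk @ Xk.T
Gk1 = Xk1 @ Xk1.T

# Appending one column adds a single outer product to G = X X^T.
print(np.allclose(Gk1, Gk + new @ new.T))  # True
```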
16. Compute the right side of the equation:
[I 0; X I][A11 0; 0 S][I Y; 0 I] = [A11, 0; XA11, S][I Y; 0 I] = [A11, A11Y; XA11, XA11Y + S]
Set this equal to the left side of the equation:
[A11, A11Y; XA11, XA11Y + S] = [A11 A12; A21 A22]
so that A11Y = A12, XA11 = A21, XA11Y + S = A22.
Since the (1, 2) blocks are equal, A11Y = A12. Since A11 is invertible, left multiplication by A11^(-1) gives Y = A11^(-1)A12. Likewise, since the (2,1) blocks are equal, XA11 = A21. Since A11 is invertible, right multiplication by A11^(-1) gives X = A21A11^(-1). One can check that the matrix S as given in the exercise satisfies the equation XA11Y + S = A22 with the calculated values of X and Y given above.
17. Suppose that A and A11 are invertible. First note that
[I 0; -X I][I 0; X I] = [I 0; 0 I]
and
[I -Y; 0 I][I Y; 0 I] = [I 0; 0 I]
Since the matrices [I 0; X I] and [I Y; 0 I] are square, they are both invertible by the IMT. Equation (7) may be left-multiplied by [I 0; X I]^(-1) and right-multiplied by [I Y; 0 I]^(-1) to find
[A11 0; 0 S] = [I 0; X I]^(-1) A [I Y; 0 I]^(-1)
Thus by Theorem 6, the matrix [A11 0; 0 S] is invertible as the product of invertible matrices. Finally, Exercise 13 above may be used to show that S is invertible.
18. Since W = [X x0],
W^T W = [X^T; x0^T][X x0] = [X^T X, X^T x0; x0^T X, x0^T x0]
By applying the formula for S from Exercise 15, S may be computed:
S = x0^T x0 - x0^T X (X^T X)^(-1) X^T x0
  = x0^T (I_m - X(X^T X)^(-1) X^T) x0
19. The matrix equation (8) in the text is equivalent to
(A - sI_n)x + Bu = 0  and  Cx + u = y
Rewrite the first equation as (A - sI_n)x = -Bu. When A - sI_n is invertible,
x = (A - sI_n)^(-1)(-B)u = -(A - sI_n)^(-1)Bu
Substitute this formula for x into the second equation above:
C(-(A - sI_n)^(-1)Bu) + u = y, so that (I_m - C(A - sI_n)^(-1)B)u = y
Thus y = (I_m - C(A - sI_n)^(-1)B)u. If W(s) = I_m - C(A - sI_n)^(-1)B, then y = W(s)u. The matrix W(s) is the Schur complement of the matrix A - sI_n in the system matrix in equation (8).
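The transfer-function formula can be evaluated numerically. Below is a numpy sketch at one sample point s (the random system and the particular value of s are assumptions for illustration; any s that is not an eigenvalue of A works):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 4, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
C = rng.standard_normal((m, n))
s = 5.0   # assumed sample point, not an eigenvalue of A

# W(s) = I_m - C (A - s I_n)^{-1} B
W = np.eye(m) - C @ np.linalg.solve(A - s * np.eye(n), B)

# Consistency check: x = -(A - sI)^{-1} B u and u satisfy both equations,
# and the output Cx + u equals W(s) u.
u = rng.standard_normal(m)
x = -np.linalg.solve(A - s * np.eye(n), B @ u)
print(np.allclose((A - s * np.eye(n)) @ x + B @ u, 0))  # True
print(np.allclose(C @ x + u, W @ u))                    # True
```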
20. The matrix in question is
[A - BC - sI_n, B; -C, I_m]
By applying the formula for S from Exercise 16, S may be computed:
S = I_m - (-C)(A - BC - sI_n)^(-1)B
  = I_m + C(A - BC - sI_n)^(-1)B
21. a. A^2 = [1 0; 2 -1][1 0; 2 -1] = [1 + 0, 0 + 0; 2 - 2, 0 + (-1)^2] = [1 0; 0 1]
b. M^2 = [A 0; I -A][A 0; I -A] = [A^2 + 0, 0 + 0; A - A, 0 + (-A)^2] = [A^2 0; 0 A^2] = [I 0; 0 I]
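A quick numpy check of both parts of Exercise 21:

```python
import numpy as np

A = np.array([[1, 0], [2, -1]])
I2 = np.eye(2)
M = np.block([[A, np.zeros((2, 2))], [I2, -A]])

print(np.allclose(A @ A, I2))          # True: A squares to I
print(np.allclose(M @ M, np.eye(4)))   # True: so M squares to I as well
```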
22. Let C be any nonzero 2×2 matrix. Define
M = [I2 0 0; 0 I2 0; C 0 -I2]
Then
M^2 = [I2 0 0; 0 I2 0; C 0 -I2][I2 0 0; 0 I2 0; C 0 -I2] = [I2, 0, 0; 0, I2, 0; CI2 - I2C, 0, (-I2)(-I2)] = [I2 0 0; 0 I2 0; 0 0 I2]
23. The product of two 1×1 "lower triangular" matrices is "lower triangular." Suppose that for n = k, the product of two k×k lower triangular matrices is lower triangular, and consider any (k+1)×(k+1) matrices A1 and B1. Partition these matrices as
A1 = [a 0^T; v A],   B1 = [b 0^T; w B]
where A and B are k×k matrices, v and w are in R^k, and a and b are scalars. Since A1 and B1 are lower triangular, so are A and B. Then
A1B1 = [a 0^T; v A][b 0^T; w B] = [ab + 0^T w, a0^T + 0^T B; vb + Aw, v0^T + AB] = [ab, 0^T; bv + Aw, AB]
Since A and B are k×k, AB is lower triangular. The form of A1B1 shows that it, too, is lower triangular. Thus the statement about lower triangular matrices is true for n = k + 1 if it is true for n = k. By the principle of induction, the statement is true for all n ≥ 1.
Note:
Exercise 23 is good for mathematics and computer science students. The solution of Exercise 23 in
the Study Guide shows students how to use the principle of induction. The Study Guide also has an
appendix on “The Principle of Induction,” at the end of Section 2.4. The text presents more applications
of induction in Section 3.2 and in the Supplementary Exercises for Chapter 3.
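A numerical illustration of Exercise 23's statement (an analogue in numpy, where np.tril extracts the lower triangular part):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5
A = np.tril(rng.standard_normal((n, n)))   # lower triangular
B = np.tril(rng.standard_normal((n, n)))   # lower triangular
P = A @ B

# P has no nonzero entries above the diagonal.
print(np.allclose(P, np.tril(P)))  # True
```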
24. Let
An = [1 0 0 ... 0; 1 1 0 ... 0; 1 1 1 ... 0; ... ; 1 1 1 ... 1],   Bn = [1 0 0 ... 0; -1 1 0 ... 0; 0 -1 1 ... 0; ... ; 0 ... -1 1]
By direct computation A2B2 = I2. Assume that for n = k, the matrix AkBk is Ik, and write
Ak+1 = [1 0^T; v Ak]  and  Bk+1 = [1 0^T; w Bk]
where v and w are in R^k, v^T = [1 1 ... 1], and w^T = [-1 0 ... 0]. Then
Ak+1Bk+1 = [1 0^T; v Ak][1 0^T; w Bk] = [1 + 0^T w, 0^T + 0^T Bk; v + Ak w, v0^T + AkBk] = [1 0^T; 0 Ik] = Ik+1
The (2,1)-entry is 0 because v equals the first column of Ak, and Akw is -1 times the first column of Ak. By the principle of induction, AnBn = In for all n ≥ 2. Since An and Bn are square, the IMT shows that these matrices are invertible, and Bn = An^(-1).
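The pair of matrices in Exercise 24 is easy to build and check in numpy for any size:

```python
import numpy as np

n = 6
A = np.tril(np.ones((n, n)))                    # lower triangular of ones
B = np.eye(n) - np.diag(np.ones(n - 1), k=-1)   # 1 on the diagonal, -1 below

print(np.allclose(A @ B, np.eye(n)))  # True, so B = A^{-1}
```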
Note:
An induction proof can also be given using partitions with the form shown below. The details are slightly more complicated.
Ak+1 = [Ak 0; v^T 1]  and  Bk+1 = [Bk 0; w^T 1]
missing
For[ j=10, j<=19, j++,
  A[[i,j]] = B[[i-4,j-9]] ] ];   Colon suppresses output.
c. To create B = [A 0; 0 A^T] with MATLAB, build B out of four blocks:
B = [A zeros(20,20); zeros(30,30) A'];
Another method: first enter B = A; and then enlarge B with the command
B(21:50, 31:50) = A';
This places A^T in the (2, 2) block of the larger B and fills in the (1, 2) and (2, 1) blocks with zeros.
For Maple:
B := matrix(50,50,0):
copyinto(A, B, 1, 1):
copyinto(transpose(A), B, 21, 31):
For Mathematica:
B = BlockMatrix[ {{A, ZeroMatrix[20,20]}, {ZeroMatrix[30,30], Transpose[A]}} ]
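A numpy analogue of the same construction (numpy is an assumption here, since the manual's commands are MATLAB, Maple, and Mathematica; the random 20×30 matrix stands in for the exercise's A):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((20, 30))   # stand-in for the exercise's 20x30 matrix

B = np.zeros((50, 50))
B[:20, :30] = A        # (1,1) block
B[20:, 30:] = A.T      # (2,2) block; the off-diagonal blocks stay zero

print(np.allclose(B[20:, 30:], A.T))   # True
print(np.count_nonzero(B[:20, 30:]))   # 0: the (1,2) block is zero
```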
27. a. [M] Construct A from four blocks, say C11, C12, C21, and C22, for example with C11 a 30×30 matrix and C22 a 20×20 matrix.
MATLAB: C11 = A(1:30, 1:30) + B(1:30, 1:30)
        C12 = A(1:30, 31:50) + B(1:30, 31:50)
        C21 = A(31:50, 1:30) + B(31:50, 1:30)
        C22 = A(31:50, 31:50) + B(31:50, 31:50)
        C = [C11 C12; C21 C22]
The commands in Maple and Mathematica are analogous, but with different syntax. The first commands are:
Maple: C11 := submatrix(A, 1..30, 1..30) + submatrix(B, 1..30, 1..30)
Mathematica: C11 := Take[ A, {1,30}, {1,30} ] + Take[ B, {1,30}, {1,30} ]
b. The algebra needed comes from block matrix multiplication:
AB = [A11 A12; A21 A22][B11 B12; B21 B22] = [A11B11 + A12B21, A11B12 + A12B22; A21B11 + A22B21, A21B12 + A22B22]
Partition both A and B, for example with 30×30 (1, 1) blocks and 20×20 (2, 2) blocks. The four necessary submatrix computations use syntax analogous to that shown for (a).
c. The algebra needed comes from the block matrix equation
[A11 0; A21 A22][x1; x2] = [b1; b2]
where x1 and b1 are in R^20 and x2 and b2 are in R^30. Then A11x1 = b1, which can be solved to produce x1. Once x1 is found, rewrite the equation A21x1 + A22x2 = b2 as A22x2 = c, where c = b2 - A21x1, and solve A22x2 = c for x2.
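This block forward-substitution is a few lines in numpy (the random, diagonally shifted blocks are assumptions chosen so both diagonal blocks are safely invertible):

```python
import numpy as np

rng = np.random.default_rng(6)
A11 = rng.standard_normal((20, 20)) + 20 * np.eye(20)
A21 = rng.standard_normal((30, 20))
A22 = rng.standard_normal((30, 30)) + 20 * np.eye(30)
b1, b2 = rng.standard_normal(20), rng.standard_normal(30)

x1 = np.linalg.solve(A11, b1)             # solve A11 x1 = b1 first
x2 = np.linalg.solve(A22, b2 - A21 @ x1)  # then A22 x2 = b2 - A21 x1

A = np.block([[A11, np.zeros((20, 30))], [A21, A22]])
print(np.allclose(A @ np.concatenate([x1, x2]),
                  np.concatenate([b1, b2])))  # True
```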
Notes:
The following may be used in place of Example 5:
Example 5: Use equation (*) to find formulas for X, Y, and Z in terms of A, B, and C. Mention any assumptions you make in order to produce the formulas.
[X 0; Y Z][I 0; A B] = [I 0; C I]   (*)
Solution:
This matrix equation provides four equations that can be used to find X, Y, and Z:
X + 0 = I,  0 = 0
YI + ZA = C,  Y0 + ZB = I   (Note the order of the factors.)
The first equation says that X = I. To solve the fourth equation, ZB = I, assume that B and Z are square. In this case, the equation ZB = I implies that B and Z are invertible, by the IMT. (Actually, it suffices to assume either that B is square or that Z is square.) Then, right-multiply each side of ZB = I to get ZBB^(-1) = IB^(-1) and Z = B^(-1). Finally, the third equation is Y + ZA = C. So, Y + B^(-1)A = C, and Y = C - B^(-1)A.
The following counterexample shows that Z need not be square for the equation (*) above to be true: take X = I2 and let Z be a 2×3 matrix, paired with a 3×2 block B, so that the left factor in (*) is 4×5, the right factor is 5×4, and ZB = I2 still holds. [A specific 4×5 times 5×4 numerical instance is displayed in the manual at this point; its entries are garbled in this copy.] Note that Z is not determined by A, B, and C when B is not square; the manual exhibits a second 2×3 matrix Z that works in the same counterexample.
2.5 SOLUTIONS
Notes:
Modern algorithms in numerical linear algebra are often described using matrix factorizations.
For practical work, this section is more important than Sections 4.7 and 5.4, even though matrix
factorizations are explained nicely in terms of change of bases. Computational exercises in this section
emphasize the use of the LU factorization to solve linear systems. The LU factorization is performed
using the algorithm explained in the paragraphs before Example 2, and performed in Example 2. The text
discusses how to build L when no interchanges are needed to reduce the given matrix to U. An appendix
in the Study Guide discusses how to build L in permuted unit lower triangular form when row
interchanges are needed. Other factorizations are introduced in Exercises 22–26.
1. L = [1 0 0; -1 1 0; 2 -5 1], U = [3 -7 -2; 0 -2 -1; 0 0 -1], b = [-7; 5; 2]. First, solve Ly = b:
[L b] = [1 0 0 -7; -1 1 0 5; 2 -5 1 2] ~ [1 0 0 -7; 0 1 0 -2; 0 -5 1 16] ~ [1 0 0 -7; 0 1 0 -2; 0 0 1 6]
The only arithmetic is in column 4, so y = [-7; -2; 6].
Next, solve Ux = y, using back-substitution (with matrix notation):
[U y] = [3 -7 -2 -7; 0 -2 -1 -2; 0 0 -1 6] ~ [3 -7 0 -19; 0 -2 0 -8; 0 0 -1 6] ~ [3 -7 0 -19; 0 1 0 4; 0 0 1 -6]
      ~ [3 0 0 9; 0 1 0 4; 0 0 1 -6] ~ [1 0 0 3; 0 1 0 4; 0 0 1 -6], so x = [3; 4; -6].
To confirm this result, row reduce the matrix [A b]:
[A b] = [3 -7 -2 -7; -3 5 1 5; 6 -4 0 2] ~ [3 -7 -2 -7; 0 -2 -1 -2; 0 10 4 16] ~ [3 -7 -2 -7; 0 -2 -1 -2; 0 0 -1 6]
From this point the row reduction follows that of [U y] above, yielding the same result.
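The two triangular solves can be replayed numerically; a numpy sketch using Exercise 1's L, U, and b (np.linalg.solve is used here for brevity in place of explicit forward and back substitution):

```python
import numpy as np

L = np.array([[1., 0, 0], [-1, 1, 0], [2, -5, 1]])
U = np.array([[3., -7, -2], [0, -2, -1], [0, 0, -1]])
b = np.array([-7., 5, 2])

y = np.linalg.solve(L, b)   # forward solve Ly = b
x = np.linalg.solve(U, y)   # back solve Ux = y

print(y)   # close to (-7, -2, 6)
print(x)   # close to (3, 4, -6)
```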
2. L = [1 0 0; -2 1 0; 0 1 1], U = [2 6 4; 0 4 8; 0 0 -2], b = [2; -4; 6]. First, solve Ly = b:
[L b] = [1 0 0 2; -2 1 0 -4; 0 1 1 6] ~ [1 0 0 2; 0 1 0 0; 0 0 1 6], so y = [2; 0; 6].
Next solve Ux = y, using back-substitution (with matrix notation):
[U y] = [2 6 4 2; 0 4 8 0; 0 0 -2 6] ~ [2 6 0 14; 0 4 0 24; 0 0 1 -3]
      ~ [2 0 0 -22; 0 1 0 6; 0 0 1 -3] ~ [1 0 0 -11; 0 1 0 6; 0 0 1 -3], so x = [-11; 6; -3].
To confirm this result, row reduce the matrix [A b]:
[A b] = [2 6 4 2; -4 -8 0 -4; 0 4 6 6] ~ [2 6 4 2; 0 4 8 0; 0 4 6 6] ~ [2 6 4 2; 0 4 8 0; 0 0 -2 6]
From this point the row reduction follows that of [U y] above, yielding the same result.
3. L = [1 0 0; -2 1 0; 3 -1 1], U = [2 4 2; 0 3 6; 0 0 1], b = [6; 0; 6]. First, solve Ly = b:
[L b] = [1 0 0 6; -2 1 0 0; 3 -1 1 6] ~ [1 0 0 6; 0 1 0 12; 0 -1 1 -12] ~ [1 0 0 6; 0 1 0 12; 0 0 1 0], so y = [6; 12; 0].
Next solve Ux = y, using back-substitution (with matrix notation):
[U y] = [2 4 2 6; 0 3 6 12; 0 0 1 0] ~ [2 4 0 6; 0 3 0 12; 0 0 1 0] ~ [2 0 0 -10; 0 1 0 4; 0 0 1 0] ~ [1 0 0 -5; 0 1 0 4; 0 0 1 0],
so x = [-5; 4; 0].
4. L = [1 0 0; -1 1 0; 3 5 1], U = [1 1 2; 0 2 -1; 0 0 6], b = [0; 5; 7]. First, solve Ly = b:
[L b] = [1 0 0 0; -1 1 0 5; 3 5 1 7] ~ [1 0 0 0; 0 1 0 5; 0 5 1 7] ~ [1 0 0 0; 0 1 0 5; 0 0 1 -18], so y = [0; 5; -18].
Next solve Ux = y, using back-substitution (with matrix notation):
[U y] = [1 1 2 0; 0 2 -1 5; 0 0 6 -18] ~ [1 1 0 6; 0 2 0 2; 0 0 1 -3] ~ [1 1 0 6; 0 1 0 1; 0 0 1 -3] ~ [1 0 0 5; 0 1 0 1; 0 0 1 -3],
so x = [5; 1; -3].
5. L = [1 0 0 0; 3 1 0 0; -1 0 1 0; -3 4 -2 1], U = [1 2 -2 -3; 0 3 6 0; 0 0 2 4; 0 0 0 1], b = [1; 6; 0; 3]. First solve Ly = b:
[L b] = [1 0 0 0 1; 3 1 0 0 6; -1 0 1 0 0; -3 4 -2 1 3]
      ~ [1 0 0 0 1; 0 1 0 0 3; 0 0 1 0 1; 0 4 -2 1 6]
      ~ [1 0 0 0 1; 0 1 0 0 3; 0 0 1 0 1; 0 0 -2 1 -6]
      ~ [1 0 0 0 1; 0 1 0 0 3; 0 0 1 0 1; 0 0 0 1 -4], so y = [1; 3; 1; -4].
Next solve Ux = y, using back-substitution (with matrix notation):
[U y] = [1 2 -2 -3 1; 0 3 6 0 3; 0 0 2 4 1; 0 0 0 1 -4]
      ~ [1 2 -2 0 -11; 0 3 6 0 3; 0 0 2 0 17; 0 0 0 1 -4]
      ~ [1 2 -2 0 -11; 0 3 6 0 3; 0 0 1 0 17/2; 0 0 0 1 -4]
      ~ [1 2 0 0 6; 0 3 0 0 -48; 0 0 1 0 17/2; 0 0 0 1 -4]
      ~ [1 2 0 0 6; 0 1 0 0 -16; 0 0 1 0 17/2; 0 0 0 1 -4]
      ~ [1 0 0 0 38; 0 1 0 0 -16; 0 0 1 0 17/2; 0 0 0 1 -4], so x = [38; -16; 17/2; -4].
6. L = [1 0 0 0; -2 1 0 0; -3 3 1 0; 5 4 -1 1], U = [1 3 2 0; 0 3 0 -12; 0 0 2 0; 0 0 0 1], b = [1; -2; 1; -2]. First, solve Ly = b:
[L b] = [1 0 0 0 1; -2 1 0 0 -2; -3 3 1 0 1; 5 4 -1 1 -2]
      ~ [1 0 0 0 1; 0 1 0 0 0; 0 3 1 0 4; 0 4 -1 1 -7]
      ~ [1 0 0 0 1; 0 1 0 0 0; 0 0 1 0 4; 0 0 -1 1 -7]
      ~ [1 0 0 0 1; 0 1 0 0 0; 0 0 1 0 4; 0 0 0 1 -3], so y = [1; 0; 4; -3].
Next solve Ux = y, using back-substitution (with matrix notation):
[U y] = [1 3 2 0 1; 0 3 0 -12 0; 0 0 2 0 4; 0 0 0 1 -3]
      ~ [1 3 2 0 1; 0 3 0 0 -36; 0 0 2 0 4; 0 0 0 1 -3]
      ~ [1 3 0 0 -3; 0 3 0 0 -36; 0 0 1 0 2; 0 0 0 1 -3]
      ~ [1 0 0 0 33; 0 1 0 0 -12; 0 0 1 0 2; 0 0 0 1 -3], so x = [33; -12; 2; -3].
7. Place the first pivot column of [2 5; -3 -4] into L, after dividing the column by 2 (the pivot), then add 3/2 times row 1 to row 2, yielding U.
A = [2 5; -3 -4] ~ [2 5; 0 7/2] = U
Pivot columns: [2; -3] and [7/2]; divide by the pivots 2 and 7/2:
L = [1 0; -3/2 1]
8. Row reduce A to echelon form using only row replacement operations. Then follow the algorithm in Example 2 to find L.
A = [6 4; 12 5] ~ [6 4; 0 -3] = U
Pivot columns: [6; 12] and [-3]; divide by the pivots 6 and -3:
L = [1 0; 2 1]
9. A = [3 1 2; -9 0 -4; 9 9 14] ~ [3 1 2; 0 3 2; 0 6 8] ~ [3 1 2; 0 3 2; 0 0 4] = U
Pivot columns: [3; -9; 9], [3; 6], [4]; divide by the pivots 3, 3, and 4:
L = [1 0 0; -3 1 0; 3 2 1]
10. A = [5 0 4; -10 2 -5; -10 10 16] ~ [5 0 4; 0 2 3; 0 10 24] ~ [5 0 4; 0 2 3; 0 0 9] = U
Pivot columns: [5; -10; -10], [2; 10], [9]; divide by the pivots 5, 2, and 9:
L = [1 0 0; -2 1 0; -2 5 1]
11. A = [3 7 2; 6 19 4; -3 -2 3] ~ [3 7 2; 0 5 0; 0 5 5] ~ [3 7 2; 0 5 0; 0 0 5] = U
Pivot columns: [3; 6; -3], [5; 5], [5]; divide by the pivots 3, 5, and 5:
L = [1 0 0; 2 1 0; -1 1 1]
12. Row reduce A to echelon form using only row replacement operations. Then follow the algorithm in Example 2 to find L. Use the last column of I3 to make L unit lower triangular.
A = [2 3 2; 4 13 9; -6 5 4] ~ [2 3 2; 0 7 5; 0 14 10] ~ [2 3 2; 0 7 5; 0 0 0] = U
Pivot columns: [2; 4; -6] and [7; 14]; divide by the pivots 2 and 7:
L = [1 0 0; 2 1 0; -3 2 1]
13. A = [1 3 -5 -3; -1 -5 8 4; 4 2 -5 -7; -2 -4 7 5]
 ~ [1 3 -5 -3; 0 -2 3 1; 0 -10 15 5; 0 2 -3 -1] ~ [1 3 -5 -3; 0 -2 3 1; 0 0 0 0; 0 0 0 0] = U. No more pivots!
Pivot columns: [1; -1; 4; -2] and [-2; -10; 2]; divide by the pivots 1 and -2. Use the last two columns of I4 to make L unit lower triangular:
L = [1 0 0 0; -1 1 0 0; 4 5 1 0; -2 -1 0 1]
14. A = [1 3 1 5; 5 20 6 31; -2 -1 -1 -4; -1 7 1 7]
 ~ [1 3 1 5; 0 5 1 6; 0 5 1 6; 0 10 2 12] ~ [1 3 1 5; 0 5 1 6; 0 0 0 0; 0 0 0 0] = U
Pivot columns: [1; 5; -2; -1] and [5; 5; 10]; divide by the pivots 1 and 5. Use the last two columns of I4 to make L unit lower triangular:
L = [1 0 0 0; 5 1 0 0; -2 1 1 0; -1 2 0 1]
15. A = [2 0 5 2; 6 3 13 3; 4 -9 16 17] ~ [2 0 5 2; 0 3 -2 -3; 0 -9 6 13] ~ [2 0 5 2; 0 3 -2 -3; 0 0 0 4] = U
Pivot columns: [2; 6; 4], [3; -9], [4]; divide by the pivots 2, 3, and 4:
L = [1 0 0; 3 1 0; 2 -3 1]
16. A = [2 3 4; 4 8 7; -6 -5 -14; 6 9 12; 8 6 19]
 ~ [2 3 4; 0 2 -1; 0 4 -2; 0 0 0; 0 -6 3] ~ [2 3 4; 0 2 -1; 0 0 0; 0 0 0; 0 0 0] = U
Pivot columns: [2; 4; -6; 6; 8] and [2; 4; 0; -6]; divide by the pivots 2 and 2. Use the last three columns of I5 to make L unit lower triangular:
L = [1 0 0 0 0; 2 1 0 0 0; -3 2 1 0 0; 3 0 0 1 0; 4 -3 0 0 1]
17. L = [1 0 0; -2 1 0; 0 1 1], U = [2 6 4; 0 4 8; 0 0 -2]. To find L^(-1), use the method of Section 2.2; that is, row reduce [L I]:
[L I] = [1 0 0 1 0 0; -2 1 0 0 1 0; 0 1 1 0 0 1] ~ [1 0 0 1 0 0; 0 1 0 2 1 0; 0 0 1 -2 -1 1] = [I L^(-1)],
so L^(-1) = [1 0 0; 2 1 0; -2 -1 1]. Likewise to find U^(-1), row reduce [U I]:
[U I] = [2 6 4 1 0 0; 0 4 8 0 1 0; 0 0 -2 0 0 1] ~ [2 6 0 1 0 2; 0 4 0 0 1 4; 0 0 -2 0 0 1]
 ~ [2 0 0 1 -3/2 -4; 0 1 0 0 1/4 1; 0 0 1 0 0 -1/2] ~ [1 0 0 1/2 -3/4 -2; 0 1 0 0 1/4 1; 0 0 1 0 0 -1/2] = [I U^(-1)],
so U^(-1) = [1/2 -3/4 -2; 0 1/4 1; 0 0 -1/2]. Thus
A^(-1) = U^(-1)L^(-1) = [1/2 -3/4 -2; 0 1/4 1; 0 0 -1/2][1 0 0; 2 1 0; -2 -1 1] = [3 5/4 -2; -3/2 -3/4 1; 1 1/2 -1/2]
18.

L = [1 0 0; 2 1 0; 3 1 1],  U = [2 -4 2; 0 3 -6; 0 0 1]

To find L^{-1}, row reduce [L I]:

[L I] = [1 0 0 1 0 0; 2 1 0 0 1 0; 3 1 1 0 0 1] ~ [1 0 0 1 0 0; 0 1 0 -2 1 0; 0 1 1 -3 0 1]
~ [1 0 0 1 0 0; 0 1 0 -2 1 0; 0 0 1 -1 -1 1] = [I L^{-1}],

so L^{-1} = [1 0 0; -2 1 0; -1 -1 1]. Likewise to find U^{-1}, row reduce [U I]:

[U I] = [2 -4 2 1 0 0; 0 3 -6 0 1 0; 0 0 1 0 0 1] ~ [2 -4 0 1 0 -2; 0 3 0 0 1 6; 0 0 1 0 0 1]
~ [2 -4 0 1 0 -2; 0 1 0 0 1/3 2; 0 0 1 0 0 1] ~ [2 0 0 1 4/3 6; 0 1 0 0 1/3 2; 0 0 1 0 0 1]
~ [1 0 0 1/2 2/3 3; 0 1 0 0 1/3 2; 0 0 1 0 0 1] = [I U^{-1}],

so U^{-1} = [1/2 2/3 3; 0 1/3 2; 0 0 1]. Thus

A^{-1} = U^{-1}L^{-1} = [1/2 2/3 3; 0 1/3 2; 0 0 1][1 0 0; -2 1 0; -1 -1 1] = [-23/6 -7/3 3; -8/3 -5/3 2; -1 -1 1]
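The inverse computations in Exercises 17–18 are easy to check with exact arithmetic. A Python sketch added for illustration, assuming the L and U shown for Exercise 18:

```python
from fractions import Fraction as F

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

L = [[1, 0, 0], [2, 1, 0], [3, 1, 1]]
U = [[2, -4, 2], [0, 3, -6], [0, 0, 1]]
L_inv = [[1, 0, 0], [-2, 1, 0], [-1, -1, 1]]
U_inv = [[F(1, 2), F(2, 3), 3], [0, F(1, 3), 2], [0, 0, 1]]

A = matmul(L, U)              # A = LU
A_inv = matmul(U_inv, L_inv)  # A^{-1} = U^{-1} L^{-1}
I = matmul(A, A_inv)          # should be the 3x3 identity, exactly
```

Because the arithmetic is exact, A·A^{-1} reproduces the identity with no rounding.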
19. Let A be a lower-triangular n × n matrix with nonzero entries on the diagonal, and consider the
augmented matrix [A I].
a. The (1, 1)-entry can be scaled to 1 and the entries below it can be changed to 0 by adding
multiples of row 1 to the rows below. This affects only the first column of A and the first column
of I. So the (2, 2)-entry in the new matrix is still nonzero and now is the only nonzero entry of
row 2 in the first n columns (because A was lower triangular). The (2, 2)-entry can be scaled to
1, and the entries below it can be changed to 0 by adding multiples of row 2 to the rows below. This
affects only columns 2 and n + 2 of the augmented matrix. Now the (3, 3) entry in A is the only
nonzero entry of the third row in the first n columns, so it can be scaled to 1 and then used as a
pivot to zero out entries below it. Continuing in this way, A is eventually reduced to I, by scaling
each row with a pivot and then using only row operations that add multiples of the pivot row to
rows below.
b. The row operations just described only add rows to rows below, so the I on the right in [A I]
changes into a lower triangular matrix. By Theorem 7 in Section 2.2, that matrix is A^{-1}.
20. Let A = LU be an LU factorization for A. Since L is unit lower triangular, it is invertible by Exercise 19. Thus by the Invertible Matrix Theorem, L may be row reduced to I. But L is unit lower triangular, so it can be row reduced to I by adding suitable multiples of a row to the rows below it, beginning with the top row. Note that all of the described row operations done to L are row-replacement operations. If elementary matrices E_1, E_2, …, E_p implement these row-replacement operations, then

E_p ⋯ E_2E_1A = (E_p ⋯ E_2E_1L)U = IU = U

This shows that A may be row reduced to U using only row-replacement operations.
21. (Solution in Study Guide.) Suppose A = BC, with B invertible. Then there exist elementary matrices E_1, …, E_p corresponding to row operations that reduce B to I, in the sense that E_p ⋯ E_1B = I. Applying the same sequence of row operations to A amounts to left-multiplying A by the product E_p ⋯ E_1. By associativity of matrix multiplication,

E_p ⋯ E_1A = E_p ⋯ E_1BC = IC = C

so the same sequence of row operations reduces A to C.
22. First find an LU factorization for A. Row reduce A to echelon form using only row replacement
operations:
A = [2 4 -2 3; -6 -9 5 -8; 2 7 -3 9; 4 2 -2 -1; -6 -3 3 4] ~ [2 4 -2 3; 0 3 -1 1; 0 3 -1 6; 0 -6 2 -7; 0 9 -3 13]
~ [2 4 -2 3; 0 3 -1 1; 0 0 0 5; 0 0 0 -5; 0 0 0 10] ~ [2 4 -2 3; 0 3 -1 1; 0 0 0 5; 0 0 0 0; 0 0 0 0] = U

then follow the algorithm in Example 2 to find L. Divide the pivot columns [2; -6; 2; 4; -6], [3; 3; -6; 9], and [5; -5; 10] by their pivots 2, 3, and 5, and use the last two columns of I_5 to make L unit lower triangular:

L = [1 0 0 0 0; -3 1 0 0 0; 1 1 1 0 0; 2 -2 -1 1 0; -3 3 2 0 1]
Now notice that the bottom two rows of U contain only zeros. If one uses the row-column method to
find LU, the entries in the final two columns of L will not be used, since these entries will be
multiplied by zeros from the bottom two rows of U. So let B be the first three columns of L and let C
be the top three rows of U. That is,
B = [1 0 0; -3 1 0; 1 1 1; 2 -2 -1; -3 3 2],  C = [2 4 -2 3; 0 3 -1 1; 0 0 0 5]
Then B and C have the desired sizes and BC = LU = A. We can generalize this process to the case
where A is m × n, A = LU, and U has only three non-zero rows: let B be the first three columns of L
and let C be the top three rows of U.
23. a. Express each row of D as the transpose of a column vector. Then use the multiplication rule for
partitioned matrices to write
A = CD = [c_1 c_2 c_3 c_4][d_1^T; d_2^T; d_3^T; d_4^T] = c_1d_1^T + c_2d_2^T + c_3d_3^T + c_4d_4^T
which is the sum of four outer products.
b. Since A has 400 × 100 = 40000 entries, C has 400 × 4 = 1600 entries and D has 4 × 100 = 400
entries, to store C and D together requires only 2000 entries, which is 5% of the amount of entries
needed to store A directly.
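The outer-product expansion in part (a) is easy to verify on a small example. A Python sketch added for illustration (the matrices C and D below are invented, not from the exercise):

```python
def outer(c, d):
    """Outer product c d^T of a column vector c and a row vector d."""
    return [[ci * dj for dj in d] for ci in c]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# C is 3 x 2 and D is 2 x 3... here 2 x 2; CD should equal c1 d1^T + c2 d2^T,
# where c_i are the columns of C and d_i^T the rows of D.
C = [[1, 2], [3, 4], [5, 6]]
D = [[7, 8], [9, 10]]
expansion = mat_add(outer([1, 3, 5], [7, 8]), outer([2, 4, 6], [9, 10]))
```

The sum of the two outer products equals the ordinary product CD.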
24. Since Q is square and Q^TQ = I, Q is invertible by the Invertible Matrix Theorem and Q^{-1} = Q^T. Thus A is the product of invertible matrices and hence is invertible. Thus by Theorem 5, the equation Ax = b has a unique solution for all b. From Ax = b, we have QRx = b, Q^TQRx = Q^Tb, Rx = Q^Tb, and finally x = R^{-1}Q^Tb. A good algorithm for finding x is to compute Q^Tb and then row reduce the matrix [R  Q^Tb]. See Exercise 11 in Section 2.2 for details on why this process works. The reduction is fast in this case because R is a triangular matrix.
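Row reducing [R  Q^Tb] when R is upper triangular amounts to back-substitution, which is why the reduction is fast. A minimal Python sketch added for illustration (the 2×2 example values in the test are invented):

```python
def back_substitute(R, y):
    """Solve Rx = y where R is upper triangular with nonzero diagonal."""
    n = len(y)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        # subtract the already-known components, then divide by the diagonal
        x[i] = (y[i] - sum(R[i][j] * x[j] for j in range(i + 1, n))) / R[i][i]
    return x
```

For an n × n system this costs on the order of n^2 operations, versus n^3/3 for a full elimination.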
25. A = UDV^T. Since U and V^T are square, the equations U^TU = I and V^TV = I imply that U and V^T are invertible, by the IMT, and hence U^{-1} = U^T and (V^T)^{-1} = V. Since the diagonal entries σ_1, …, σ_n in D are nonzero, D is invertible, with the inverse of D being the diagonal matrix with σ_1^{-1}, …, σ_n^{-1} on the diagonal. Thus A is a product of invertible matrices. By Theorem 6, A is invertible and

A^{-1} = (UDV^T)^{-1} = (V^T)^{-1}D^{-1}U^{-1} = VD^{-1}U^T.
26. If A = PDP^{-1}, where P is an invertible 3 × 3 matrix and D is the diagonal matrix

D = [2 0 0; 0 -3 0; 0 0 1]

then

A^2 = (PDP^{-1})(PDP^{-1}) = PD(P^{-1}P)DP^{-1} = PDIDP^{-1} = PD^2P^{-1}

and since

D^2 = [2 0 0; 0 -3 0; 0 0 1][2 0 0; 0 -3 0; 0 0 1] = [4 0 0; 0 9 0; 0 0 1],  A^2 = P[4 0 0; 0 9 0; 0 0 1]P^{-1}

Likewise, A^3 = PD^3P^{-1}, so

A^3 = P[2^3 0 0; 0 (-3)^3 0; 0 0 1]P^{-1} = P[8 0 0; 0 -27 0; 0 0 1]P^{-1}

In general, A^k = PD^kP^{-1}, so

A^k = P[2^k 0 0; 0 (-3)^k 0; 0 0 1]P^{-1}
27. First consider using a series circuit with resistance R_1 followed by a shunt circuit with resistance R_2 for the network. The transfer matrix for this network is

[1 0; -1/R_2 1][1 -R_1; 0 1] = [1 -R_1; -1/R_2 (R_1 + R_2)/R_2]

For an input of 12 volts and 6 amps to produce an output of 9 volts and 4 amps, the transfer matrix must satisfy

[1 -R_1; -1/R_2 (R_1 + R_2)/R_2][12; 6] = [12 - 6R_1; (-12 + 6R_1 + 6R_2)/R_2] = [9; 4]

Equate the top entries and obtain R_1 = 1/2 ohm. Substitute this value in the bottom entry and solve to obtain R_2 = 9/2 ohms. The ladder network is shown in figure (a): a 1/2-ohm series circuit followed by a 9/2-ohm shunt circuit.

Next consider using a shunt circuit with resistance R_1 followed by a series circuit with resistance R_2 for the network. The transfer matrix for this network is

[1 -R_2; 0 1][1 0; -1/R_1 1] = [(R_1 + R_2)/R_1 -R_2; -1/R_1 1]

For an input of 12 volts and 6 amps to produce an output of 9 volts and 4 amps, the transfer matrix must satisfy

[(R_1 + R_2)/R_1 -R_2; -1/R_1 1][12; 6] = [(12R_1 + 12R_2 - 6R_1R_2)/R_1; -12/R_1 + 6] = [9; 4]

Equate the bottom entries and obtain R_1 = 6 ohms. Substitute this value in the top entry and solve to obtain R_2 = 3/4 ohms. The ladder network is shown in figure (b): a 6-ohm shunt circuit followed by a 3/4-ohm series circuit.

[Figure (a): ladder network with currents i_1, i_2, i_3, voltages v_1, v_2, v_3, a 1/2-ohm series resistor, and a 9/2-ohm shunt resistor.]
28. The three shunt circuits have transfer matrices

[1 0; -1/R_1 1], [1 0; -1/R_2 1], and [1 0; -1/R_3 1]

respectively. To find the transfer matrix for the series of circuits, multiply these matrices:

[1 0; -1/R_3 1][1 0; -1/R_2 1][1 0; -1/R_1 1] = [1 0; -(1/R_1 + 1/R_2 + 1/R_3) 1]

Thus the resulting network is itself a shunt circuit with resistance

R_1R_2R_3 / (R_1R_2 + R_1R_3 + R_2R_3)
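Equivalently, this is the usual parallel-resistor rule 1/R = 1/R_1 + 1/R_2 + 1/R_3. A quick numeric check, added for illustration (the resistance values are invented):

```python
def parallel(*resistances):
    """Combined resistance of shunt (parallel) resistors: 1/R = sum of 1/R_i."""
    return 1 / sum(1 / r for r in resistances)

# Three 6-ohm shunts combine to 6*6*6 / (36 + 36 + 36) = 2 ohms.
combined = parallel(6, 6, 6)
```

The reciprocal form and the product-over-sum formula in the solution agree for any number of shunts.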
29. a. The first circuit is a series circuit with resistance R_1 ohms, so its transfer matrix is [1 -R_1; 0 1]. The second circuit is a shunt circuit with resistance R_2 ohms, so its transfer matrix is [1 0; -1/R_2 1]. The third circuit is a series circuit with resistance R_3 ohms, so its transfer matrix is [1 -R_3; 0 1]. The transfer matrix of the network is the product of these matrices, in right-to-left order:

[1 -R_3; 0 1][1 0; -1/R_2 1][1 -R_1; 0 1] = [1 + R_3/R_2, -R_1 - R_3 - R_1R_3/R_2; -1/R_2, 1 + R_1/R_2]

b. To find a ladder network with a structure like that in part (a) and with the given transfer matrix A, we must find resistances R_1, R_2, and R_3 such that

A = [3 -12; -1/3 5/3] = [1 + R_3/R_2, -R_1 - R_3 - R_1R_3/R_2; -1/R_2, 1 + R_1/R_2]

From the (2, 1) entries, R_2 = 3 ohms. The (1, 1) entries now give 1 + R_3/R_2 = 3, which may be solved to obtain R_3 = 6 ohms. Likewise the (2, 2) entries give 1 + R_1/R_2 = 5/3, which also may be solved to obtain R_1 = 2 ohms. Thus the matrix A may be factored as

A = [1 -R_3; 0 1][1 0; -1/R_2 1][1 -R_1; 0 1] = [1 -6; 0 1][1 0; -1/3 1][1 -2; 0 1]
30. Answers may vary. For example,

[3 -12; -1/3 5/3] = [1 -6; 0 1][1 0; -1/6 1][1 0; -1/6 1][1 -2; 0 1]

[Figure (b): ladder network with currents i_1, i_2, i_3, voltages v_1, v_2, v_3, a 6-ohm shunt resistor, and a 3/4-ohm series resistor.]
The network corresponding to this factorization consists of a series circuit, followed by two shunts, followed by another series circuit. The resistances would be R_1 = 2, R_2 = 6, R_3 = 6, R_4 = 6.
Note:
The Study Guide’s MATLAB box for Section 2.5 suggests that for most LU factorizations in this
section, students can use the gauss command repeatedly to produce U, and use paper and mental
arithmetic to write down the columns of L as the row reduction to U proceeds. This is because for
Exercises 7–16 the pivots are integers and other entries are simple fractions. However, for Exercises 31
and 32 this is not reasonable, and students are expected to solve an elementary programming problem.
(The Study Guide provides no hints.)
31. [M] Store the matrix A in a temporary matrix B and create L initially as the 8×8 identity matrix. The
following sequence of MATLAB commands fills in the entries of L below the diagonal, one column
at a time, until the first seven columns are filled. (The eighth column is the final column of the
identity matrix.)
L(2:8, 1) = B(2:8, 1)/B(1, 1)
B = gauss(B, 1)
L(3:8, 2) = B(3:8, 2)/B(2, 2)
B = gauss(B, 2)
⋮
L(8:8, 7) = B(8:8, 7)/B(7, 7)
U = gauss(B,7)
Of course, some students may realize that a loop will speed up the process. The for..end syntax
is illustrated in the MATLAB box for Section 5.6. Here is a MATLAB program that includes the
initial setup of B and L:
B = A
L = eye(8)
for j=1:7
L(j+1:8, j) = B(j+1:8, j)/B(j, j)
B = gauss(B, j)
end
U = B
a. To four decimal places, the results of the LU decomposition are

L = [1 0 0 0 0 0 0 0;
 -.25 1 0 0 0 0 0 0;
 -.25 -.0667 1 0 0 0 0 0;
 0 -.2667 -.2857 1 0 0 0 0;
 0 0 -.2679 -.0833 1 0 0 0;
 0 0 0 -.2917 -.2921 1 0 0;
 0 0 0 0 -.2697 -.0861 1 0;
 0 0 0 0 0 -.2948 -.2931 1]

U = [4 -1 -1 0 0 0 0 0;
 0 3.75 -.25 -1 0 0 0 0;
 0 0 3.7333 -1.0667 -1 0 0 0;
 0 0 0 3.4286 -.2857 -1 0 0;
 0 0 0 0 3.7083 -1.0833 -1 0;
 0 0 0 0 0 3.3919 -.2921 -1;
 0 0 0 0 0 0 3.7052 -1.0861;
 0 0 0 0 0 0 0 3.3868]
b. The result of solving Ly = b and then Ux = y is
x = (27.1292, 19.2344, 29.2823, 19.8086, 30.1914, 20.7177, 30.7656, 22.8708)
c.

A^{-1} = [.2953 .0866 .0945 .0509 .0318 .0227 .0010 .0082;
 .0866 .2953 .0509 .0945 .0227 .0318 .0082 .0100;
 .0945 .0509 .3271 .1093 .1045 .0591 .0318 .0227;
 .0509 .0945 .1093 .3271 .0591 .1045 .0227 .0318;
 .0318 .0227 .1045 .0591 .3271 .1093 .0945 .0509;
 .0227 .0318 .0591 .1045 .1093 .3271 .0509 .0945;
 .0010 .0082 .0318 .0227 .0945 .0509 .2953 .0866;
 .0082 .0100 .0227 .0318 .0509 .0945 .0866 .2953]
32. [M]

A = [3 -1 0 0; -1 3 -1 0; 0 -1 3 -1; 0 0 -1 3]

The commands shown for Exercise 31, but modified for 4×4 matrices, produce

L = [1 0 0 0; -1/3 1 0 0; 0 -3/8 1 0; 0 0 -8/21 1]

U = [3 -1 0 0; 0 8/3 -1 0; 0 0 21/8 -1; 0 0 0 55/21]
b. Let s_{k+1} be the solution of Ls_{k+1} = t_k for k = 0, 1, 2, …. Then t_{k+1} is the solution of Ut_{k+1} = s_{k+1} for k = 0, 1, 2, …. The results are

s_1 = (10.0000, 18.3333, 21.8750, 18.3333), t_1 = (7.0000, 11.0000, 11.0000, 7.0000),
s_2 = (7.0000, 13.3333, 16.0000, 13.0952), t_2 = (5.0000, 8.0000, 8.0000, 5.0000),
s_3 = (5.0000, 9.6667, 11.6250, 9.4286), t_3 = (3.6000, 5.8000, 5.8000, 3.6000),
s_4 = (3.6000, 7.0000, 8.4250, 6.8095), t_4 = (2.6000, 4.2000, 4.2000, 2.6000).
2.6 SOLUTIONS
Notes:
This section is independent of Section 1.10. The material here makes a good backdrop for the
series expansion of (I - C)^{-1}
because this formula is actually used in some practical economic work.
Exercise 8 gives an interpretation to entries of an inverse matrix that could be stated without the economic
context.
1. The answer to this exercise will depend upon the order in which the student chooses to list the
sectors. The important fact to remember is that each column is the unit consumption vector for the
appropriate sector. If we order the sectors manufacturing, agriculture, and services, then the
consumption matrix is
C = [.10 .60 .60; .30 .20 .00; .30 .10 .10]

The intermediate demands created by the production vector x are given by Cx. Thus in this case the intermediate demand is

Cx = [.10 .60 .60; .30 .20 .00; .30 .10 .10][0; 100; 0] = [60; 20; 10]
2. Solve the equation x = Cx + d for d:

d = x - Cx = [x_1; x_2; x_3] - [.10 .60 .60; .30 .20 .00; .30 .10 .10][x_1; x_2; x_3] = [.9x_1 - .6x_2 - .6x_3; -.3x_1 + .8x_2; -.3x_1 - .1x_2 + .9x_3] = [0; 20; 0]

This system of equations has the augmented matrix

[.90 -.60 -.60 0; -.30 .80 .00 20; -.30 -.10 .90 0] ~ [1 0 0 37.03; 0 1 0 38.89; 0 0 1 16.67], so x = [37.03; 38.89; 16.67].
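Exercises 2–4 all solve the same kind of system (I − C)x = d. The elimination can be sketched in a few lines of Python (an illustration added here, not the text's hand computation):

```python
def solve(M, b):
    """Solve Mx = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    A = [row[:] + [bi] for row, bi in zip(M, b)]   # augmented matrix [M b]
    for j in range(n):
        p = max(range(j, n), key=lambda i: abs(A[i][j]))  # pivot row
        A[j], A[p] = A[p], A[j]
        for i in range(j + 1, n):
            m = A[i][j] / A[j][j]
            A[i] = [a - m * aj for a, aj in zip(A[i], A[j])]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):  # back substitution
        x[i] = (A[i][n] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x
```

With M = I − C from Exercise 2 and d = (0, 20, 0) this reproduces x ≈ (37.03, 38.89, 16.67).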
3. Solving as in Exercise 2:

d = x - Cx = [.9x_1 - .6x_2 - .6x_3; -.3x_1 + .8x_2; -.3x_1 - .1x_2 + .9x_3] = [20; 0; 0]

This system of equations has the augmented matrix

[.90 -.60 -.60 20; -.30 .80 .00 0; -.30 -.10 .90 0] ~ [1 0 0 44.44; 0 1 0 16.67; 0 0 1 16.67], so x = [44.44; 16.67; 16.67].
4. Solving as in Exercise 2:

d = x - Cx = [.9x_1 - .6x_2 - .6x_3; -.3x_1 + .8x_2; -.3x_1 - .1x_2 + .9x_3] = [20; 20; 0]

This system of equations has the augmented matrix

[.90 -.60 -.60 20; -.30 .80 .00 20; -.30 -.10 .90 0] ~ [1 0 0 81.48; 0 1 0 55.56; 0 0 1 33.33], so x = [81.48; 55.56; 33.33].
Note:
Exercises 2–4 may be used by students to discover the linearity of the Leontief model.
5.

x = (I - C)^{-1}d = [1 -.5; -.6 .8]^{-1}[50; 30] = [1.6 1; 1.2 2][50; 30] = [110; 120]
6.

x = (I - C)^{-1}d = [.8 -.5; -.6 .9]^{-1}[16; 12] = (1/(.72 - .30))[.9 .5; .6 .8][16; 12] = [48.57; 45.71]
7. a. From Exercise 5, (I - C)^{-1} = [1.6 1; 1.2 2], so

x_1 = (I - C)^{-1}d_1 = [1.6 1; 1.2 2][1; 0] = [1.6; 1.2]

which is the first column of (I - C)^{-1}.

b. x_2 = (I - C)^{-1}d_2 = [1.6 1; 1.2 2][51; 30] = [111.6; 121.2]

c. From Exercise 5, the production x corresponding to d = [50; 30] is x = [110; 120]. Note that d_2 = d + d_1. Thus

x_2 = (I - C)^{-1}d_2 = (I - C)^{-1}(d + d_1) = (I - C)^{-1}d + (I - C)^{-1}d_1 = x + x_1
8. a. Given (I - C)x = d and (I - C)Δx = Δd,

(I - C)(x + Δx) = (I - C)x + (I - C)Δx = d + Δd

Thus x + Δx is the production level corresponding to a demand of d + Δd.

b. Since Δx = (I - C)^{-1}Δd and Δd is the first column of I, Δx will be the first column of (I - C)^{-1}.
9. In this case

I - C = [.8 -.2 .0; -.3 .9 -.3; -.1 .0 .8]

Row reduce [I - C  d] to find

[.8 -.2 .0 40.0; -.3 .9 -.3 60.0; -.1 .0 .8 80.0] ~ [1 0 0 82.8; 0 1 0 131.0; 0 0 1 110.3]

So x = (82.8, 131.0, 110.3).
10. From Exercise 8, the (i, j) entry in (I - C)^{-1} corresponds to the effect on production of sector i when the final demand for the output of sector j increases by one unit. Since these entries are all positive, an increase in the final demand for any sector will cause the production of all sectors to increase. Thus an increase in the demand for any sector will lead to an increase in the demand for all sectors.
11. (Solution in Study Guide.) Following the hint in the text, compute p^Tx in two ways. First, take the transpose of both sides of the price equation, p = C^Tp + v, to obtain

p^T = (C^Tp + v)^T = p^TC + v^T

and right-multiply by x to get

p^Tx = (p^TC + v^T)x = p^TCx + v^Tx

Another way to compute p^Tx starts with the production equation x = Cx + d. Left-multiply by p^T to get

p^Tx = p^T(Cx + d) = p^TCx + p^Td

The two expressions for p^Tx show that

p^TCx + v^Tx = p^TCx + p^Td

so v^Tx = p^Td. The Study Guide also provides a slightly different solution.
12. Since

D_{m+1} = I + C + C^2 + … + C^{m+1} = I + C(I + C + … + C^m) = I + CD_m,

D_{m+1} may be found iteratively from D_m by D_{m+1} = I + CD_m.
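The iteration D_{m+1} = I + CD_m is straightforward to code. A Python sketch added for illustration; the 2×2 consumption matrix used in the test is the one implied by Exercise 5, an assumption of this example:

```python
def leontief_series(C, m):
    """Compute D_m = I + C + C^2 + ... + C^m via the recursion D_{k+1} = I + C D_k."""
    n = len(C)
    D = [[float(i == j) for j in range(n)] for i in range(n)]   # D_0 = I
    for _ in range(m):
        CD = [[sum(C[i][k] * D[k][j] for k in range(n)) for j in range(n)]
              for i in range(n)]
        D = [[float(i == j) + CD[i][j] for j in range(n)] for i in range(n)]
    return D
```

For a consumption matrix with column sums below 1, D_m converges to (I − C)^{-1} as m grows, which is the point of the series expansion.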
13. [M] The matrix I - C is

[0.8412 -0.0064 -0.0025 -0.0304 -0.0014 -0.0083 -0.1594;
 -0.0057 0.7355 -0.0436 -0.0099 -0.0083 -0.0201 -0.3413;
 -0.0264 -0.1506 0.6443 -0.0139 -0.0142 -0.0070 -0.0236;
 -0.3299 -0.0565 -0.0495 0.6364 -0.0204 -0.0483 -0.0649;
 -0.0089 -0.0081 -0.0333 -0.0295 0.6588 -0.0237 -0.0020;
 -0.1190 -0.0901 -0.0996 -0.1260 -0.1722 0.7632 -0.3369;
 -0.0063 -0.0126 -0.0196 -0.0098 -0.0064 -0.0132 0.9988]

so the augmented matrix [I - C  d], with d = (74000, 56000, 10500, 25000, 17500, 196000, 5000), may be row reduced to [I  x] with

x = (99576, 97703, 51231, 131570, 49488, 329554, 13835).

Since the entries in d seem to be accurate to the nearest thousand, a more realistic answer would be x = (100000, 98000, 51000, 132000, 49000, 330000, 14000).
14. [M] The augmented matrix [I - C  d] in this case, with the same I - C as in Exercise 13 and d = (99640, 75548, 14444, 33501, 23527, 263985, 6526), may be row reduced to [I  x] with

x = (134034, 131687, 69472, 176912, 66596, 443773, 18431).

To the nearest thousand, x = (134000, 132000, 69000, 177000, 67000, 444000, 18000).
15. [M] Here are the iterations rounded to the nearest tenth:

x^(0) = (74000.0, 56000.0, 10500.0, 25000.0, 17500.0, 196000.0, 5000.0)
x^(1) = (89344.2, 77730.5, 26708.1, 72334.7, 30325.6, 265158.2, 9327.8)
x^(2) = (94681.2, 87714.5, 37577.3, 100520.5, 38598.0, 296563.8, 11480.0)
x^(3) = (97091.9, 92573.1, 43867.8, 115457.0, 43491.0, 312319.0, 12598.8)
x^(4) = (98291.6, 95033.2, 47314.5, 123202.5, 46247.0, 320502.4, 13185.5)
x^(5) = (98907.2, 96305.3, 49160.6, 127213.7, 47756.4, 324796.1, 13493.8)
x^(6) = (99226.6, 96969.6, 50139.6, 129296.7, 48569.3, 327053.8, 13655.9)
x^(7) = (99393.1, 97317.8, 50656.4, 130381.6, 49002.8, 328240.9, 13741.1)
x^(8) = (99480.0, 97500.7, 50928.7, 130948.0, 49232.5, 328864.7, 13785.9)
x^(9) = (99525.5, 97596.8, 51071.9, 131244.1, 49353.8, 329192.3, 13809.4)
x^(10) = (99549.4, 97647.2, 51147.2, 131399.2, 49417.7, 329364.4, 13821.7)
x^(11) = (99561.9, 97673.7, 51186.8, 131480.4, 49451.3, 329454.7, 13828.2)
x^(12) = (99568.4, 97687.6, 51207.5, 131523.0, 49469.0, 329502.1, 13831.6)

so x^(12) is the first vector whose entries are accurate to the nearest thousand. The calculation of x^(12) takes about 1260 flops, while the row reduction above takes about 550 flops. If C is larger than 20 × 20, then fewer flops are required to compute x^(12) by iteration than by row reduction. The advantage of the iterative method increases with the size of C. The matrix C also becomes more sparse for larger models, so fewer iterations are needed for good accuracy.
2.7 SOLUTIONS
Notes:
The content of this section seems to have universal appeal with students. It also provides practice
with composition of linear transformations. The case study for Chapter 2 concerns computer graphics; see this case study (available as a project on the website) for more examples of computer graphics in
action. The Study Guide encourages the student to examine the book by Foley referenced in the text. This
section could form the beginning of an independent study on computer graphics with an interested
student.
1. Refer to Example 5. The representation in homogeneous coordinates can be written as a partitioned matrix of the form [A 0; 0^T 1], where A is the matrix of the linear transformation. Since in this case A = [1 .25; 0 1], the representation of the transformation with respect to homogeneous coordinates is

[1 .25 0; 0 1 0; 0 0 1]

Note: The Study Guide shows the student why the action of [A 0; 0^T 1] on the vector [x; 1] corresponds to the action of A on x.
2. The matrix of the transformation is A = [1 0; 0 -1], so the transformed data matrix is

AD = [1 0; 0 -1][4 2 5; 0 2 3] = [4 2 5; 0 -2 -3]
3. Following Examples 4–6,

[0 -1 0; 1 0 0; 0 0 1][1 0 2; 0 1 1; 0 0 1] = [0 -1 -1; 1 0 2; 0 0 1]
4.

[1/2 0 0; 0 3/2 0; 0 0 1][1 0 -1; 0 1 4; 0 0 1] = [1/2 0 -1/2; 0 3/2 6; 0 0 1]
5.

[√2/2 -√2/2 0; √2/2 √2/2 0; 0 0 1][1 0 0; 0 -1 0; 0 0 1] = [√2/2 √2/2 0; √2/2 -√2/2 0; 0 0 1]
6.

[1 0 0; 0 -1 0; 0 0 1][√2/2 -√2/2 0; √2/2 √2/2 0; 0 0 1] = [√2/2 -√2/2 0; -√2/2 -√2/2 0; 0 0 1]
7. A 60° rotation about the origin is given in homogeneous coordinates by the matrix

[1/2 -√3/2 0; √3/2 1/2 0; 0 0 1]

To rotate about the point (6, 8), first translate by (-6, -8), then rotate about the origin, then translate back by (6, 8) (see the Practice Problem in this section). A 60° rotation about (6, 8) is thus given in homogeneous coordinates by the matrix

[1 0 6; 0 1 8; 0 0 1][1/2 -√3/2 0; √3/2 1/2 0; 0 0 1][1 0 -6; 0 1 -8; 0 0 1] = [1/2 -√3/2 3+4√3; √3/2 1/2 4-3√3; 0 0 1]
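The translate–rotate–translate composition can be checked numerically. A Python sketch added as an illustration; it builds the product T(p)·R(θ)·T(−p) with the translation column multiplied out:

```python
import math

def rotation_about(px, py, theta):
    """3x3 homogeneous-coordinate matrix for rotation by theta about (px, py).
    Equals T(p) * R(theta) * T(-p); the last column is p - R(theta) p."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, px - c * px + s * py],
            [s,  c, py - s * px - c * py],
            [0,  0, 1]]

def apply(M, x, y):
    """Apply a homogeneous 3x3 matrix to the point (x, y)."""
    v = [x, y, 1]
    r = [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]
    return r[0] / r[2], r[1] / r[2]
```

The center of rotation is a fixed point, and for θ = 60° about (6, 8) the translation column agrees with (3 + 4√3, 4 − 3√3).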
8. A 45° rotation about the origin is given in homogeneous coordinates by the matrix

[√2/2 -√2/2 0; √2/2 √2/2 0; 0 0 1]

To rotate about the point (3, 7), first translate by (-3, -7), then rotate about the origin, then translate back by (3, 7) (see the Practice Problem in this section). A 45° rotation about (3, 7) is thus given in homogeneous coordinates by the matrix

[1 0 3; 0 1 7; 0 0 1][√2/2 -√2/2 0; √2/2 √2/2 0; 0 0 1][1 0 -3; 0 1 -7; 0 0 1] = [√2/2 -√2/2 3+2√2; √2/2 √2/2 7-5√2; 0 0 1]
9. To produce each entry in BD two multiplications are necessary. Since BD is a 2 × 100 matrix, it will take 2 × 2 × 100 = 400 multiplications to compute BD. By the same reasoning it will take 2 × 2 × 100 = 400 multiplications to compute A(BD). Thus to compute A(BD) from the beginning will take 400 + 400 = 800 multiplications.

To compute the 2 × 2 matrix AB it will take 2 × 2 × 2 = 8 multiplications, and to compute (AB)D it will take 2 × 2 × 100 = 400 multiplications. Thus to compute (AB)D from the beginning will take 8 + 400 = 408 multiplications.
For computer graphics calculations that require applying multiple transformations to data
matrices, it is thus more efficient to compute the product of the transformation matrices before
applying the result to the data matrix.
10. Let the transformation matrices in homogeneous coordinates for the dilation, rotation, and translation be called respectively D, R, and T. Then for some value of s, φ, h, and k,

D = [s 0 0; 0 s 0; 0 0 1],  R = [cos φ -sin φ 0; sin φ cos φ 0; 0 0 1],  T = [1 0 h; 0 1 k; 0 0 1]

Compute the products of these matrices:

DR = [s cos φ, -s sin φ, 0; s sin φ, s cos φ, 0; 0 0 1] = RD

DT = [s 0 sh; 0 s sk; 0 0 1],  TD = [s 0 h; 0 s k; 0 0 1]

RT = [cos φ, -sin φ, h cos φ - k sin φ; sin φ, cos φ, h sin φ + k cos φ; 0 0 1],  TR = [cos φ, -sin φ, h; sin φ, cos φ, k; 0 0 1]

Since DR = RD, DT ≠ TD, and RT ≠ TR, D and R commute, D and T do not commute, and R and T do not commute.
11. To simplify A_2A_1 completely, the following trigonometric identities will be needed:

1. tan φ cos φ = (sin φ / cos φ) cos φ = sin φ
2. sec φ - tan φ sin φ = 1/cos φ - (sin φ / cos φ) sin φ = (1 - sin²φ)/cos φ = cos²φ / cos φ = cos φ

Using these identities,

A_2A_1 = [sec φ, -tan φ, 0; 0 1 0; 0 0 1][1 0 0; sin φ, cos φ, 0; 0 0 1]
= [sec φ - tan φ sin φ, -tan φ cos φ, 0; sin φ, cos φ, 0; 0 0 1]
= [cos φ, -sin φ, 0; sin φ, cos φ, 0; 0 0 1]

which is the transformation matrix in homogeneous coordinates for a rotation in R^2.
12. To simplify this product completely, the following trigonometric identity will be needed:

tan(φ/2) = (1 - cos φ)/sin φ = sin φ/(1 + cos φ)

This identity has two important consequences:

1 - (tan(φ/2))(sin φ) = 1 - ((1 - cos φ)/sin φ)(sin φ) = 1 - (1 - cos φ) = cos φ

(cos φ)(-tan(φ/2)) - tan(φ/2) = -(cos φ + 1)tan(φ/2) = -(cos φ + 1)(sin φ/(1 + cos φ)) = -sin φ

The product may be computed and simplified using these results:

[1 -tan(φ/2) 0; 0 1 0; 0 0 1][1 0 0; sin φ 1 0; 0 0 1][1 -tan(φ/2) 0; 0 1 0; 0 0 1]
= [1 - (tan(φ/2))(sin φ), -tan(φ/2), 0; sin φ, 1, 0; 0 0 1][1 -tan(φ/2) 0; 0 1 0; 0 0 1]
= [cos φ, -tan(φ/2), 0; sin φ, 1, 0; 0 0 1][1 -tan(φ/2) 0; 0 1 0; 0 0 1]
= [cos φ, (cos φ)(-tan(φ/2)) - tan(φ/2), 0; sin φ, -(sin φ)(tan(φ/2)) + 1, 0; 0 0 1]
= [cos φ, -sin φ, 0; sin φ, cos φ, 0; 0 0 1]

which is the transformation matrix in homogeneous coordinates for a rotation in R^2.
13. Consider first applying the linear transformation on R^2 whose matrix is A, then applying a translation by the vector p to the result. The matrix representation in homogeneous coordinates of the linear transformation is [A 0; 0^T 1], while the matrix representation in homogeneous coordinates of the translation is [I p; 0^T 1]. Applying these transformations in order leads to a transformation whose matrix representation in homogeneous coordinates is

[I p; 0^T 1][A 0; 0^T 1] = [A p; 0^T 1]

which is the desired matrix.
14. The matrix for the transformation in Exercise 7 was found to be

[1/2 -√3/2 3+4√3; √3/2 1/2 4-3√3; 0 0 1]

This matrix is of the form [A p; 0^T 1], where

A = [1/2 -√3/2; √3/2 1/2],  p = [3+4√3; 4-3√3]

By Exercise 13, this matrix may be written as

[I p; 0^T 1][A 0; 0^T 1]

that is, the composition of a linear transformation on R^2 and a translation. The matrix A is the matrix of a rotation of 60° about the origin in R^2. Thus the transformation in Exercise 7 is the composition of a rotation about the origin and a translation by p = [3+4√3; 4-3√3].
15. Since (X, Y, Z, H) = (1/2, -1/4, 1/8, 1/24), the corresponding point in R^3 has coordinates

(x, y, z) = (X/H, Y/H, Z/H) = ((1/2)/(1/24), (-1/4)/(1/24), (1/8)/(1/24)) = (12, -6, 3)
16. The homogeneous coordinates (1, -2, 3, 4) represent the point

(1/4, -2/4, 3/4) = (1/4, -1/2, 3/4)

while the homogeneous coordinates (10, -20, 30, 40) represent the point

(10/40, -20/40, 30/40) = (1/4, -1/2, 3/4)

so the two sets of homogeneous coordinates represent the same point in R^3.
17. Follow Example 7a by first constructing the 3 × 3 matrix for this rotation. The vector e_1 is not changed by this rotation. The vector e_2 is rotated 60° toward the positive z-axis, ending up at the point (0, cos 60°, sin 60°) = (0, 1/2, √3/2). The vector e_3 is rotated 60° toward the negative y-axis, stopping at the point (0, cos 150°, sin 150°) = (0, -√3/2, 1/2). The matrix A for this rotation is thus

A = [1 0 0; 0 1/2 -√3/2; 0 √3/2 1/2]

so in homogeneous coordinates the transformation is represented by the matrix

[A 0; 0^T 1] = [1 0 0 0; 0 1/2 -√3/2 0; 0 √3/2 1/2 0; 0 0 0 1]
18. First construct the
33×
matrix for the rotation. The vector e
1
is rotated 30° toward the negative y-
axis, ending up at the point (cos(–30)°, sin (–30)°, 0) =
(3/2,1/2,0).
The vector e
2
is rotated 30°
toward the positive x-axis, ending up at the point (cos 60°, sin 60°, 0) =
(1 / 2, 3 / 2, 0).
The vector e
3
is not changed by the rotation. The matrix A for the rotation is thus
3/2 1/2 0
1/ 2 3 / 2 0
001
A
ªº
«»
=
«»
«»
«»
¬¼
so in homogeneous coordinates the rotation is represented by the matrix
3/2 1/2 0 0
1/2 3 / 2 0 0
10010
0001
T
A
ªº
«»
ªº
«»
=
«»
«»
¬¼
«»
«»
¬¼
0
0
Following Example 7b, in homogeneous coordinates the translation by the vector (5, –2, 1) is
represented by the matrix
144 CHAPTER 2 Matrix Algebra
Copyright © 2012 Pearson Education, Inc. Publishing as Addison-Wesley.
[1  0  0   5]
[0  1  0  –2]
[0  0  1   1]
[0  0  0   1]

Thus the complete transformation is represented in homogeneous coordinates by the matrix

[1  0  0   5][ √3/2  1/2   0  0]   [ √3/2  1/2   0   5]
[0  1  0  –2][–1/2   √3/2  0  0] = [–1/2   √3/2  0  –2]
[0  0  1   1][  0     0    1  0]   [  0     0    1   1]
[0  0  0   1][  0     0    0  1]   [  0     0    0   1]
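The composition of the rotation and the translation can be sketched in Python (an illustration only; the helper names `rotation_z`, `translation`, and `matmul` are ours):

```python
import math

def rotation_z(theta):
    # 4x4 homogeneous matrix for rotation about the z-axis by theta.
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0, 0],
            [s,  c, 0, 0],
            [0,  0, 1, 0],
            [0,  0, 0, 1]]

def translation(dx, dy, dz):
    # 4x4 homogeneous matrix for translation by (dx, dy, dz).
    return [[1, 0, 0, dx],
            [0, 1, 0, dy],
            [0, 0, 1, dz],
            [0, 0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Rotation by -30 degrees about the z-axis, followed by translation by (5, -2, 1).
M = matmul(translation(5, -2, 1), rotation_z(-math.pi / 6))
assert abs(M[0][0] - math.sqrt(3) / 2) < 1e-12
assert abs(M[0][1] - 1 / 2) < 1e-12
assert (M[0][3], M[1][3], M[2][3]) == (5, -2, 1)
```

Multiplying the translation matrix on the left of the rotation matrix matches the order in the solution above: rotate first, then translate.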
19. Referring to the material preceding Example 8 in the text, we find that the matrix P that performs a perspective projection with center of projection (0, 0, 10) is

P = [1  0   0   0]
    [0  1   0   0]
    [0  0   0   0]
    [0  0  –.1  1]

The homogeneous coordinates of the vertices of the triangle may be written as (4.2, 1.2, 4, 1), (6, 4, 2, 1), and (2, 2, 6, 1), so the data matrix for S is

[4.2  6  2]
[1.2  4  2]
[ 4   2  6]
[ 1   1  1]

and the data matrix for the transformed triangle is

[1  0   0   0][4.2  6  2]   [4.2   6    2 ]
[0  1   0   0][1.2  4  2] = [1.2   4    2 ]
[0  0   0   0][ 4   2  6]   [ 0    0    0 ]
[0  0  –.1  1][ 1   1  1]   [ .6   .8   .4]

Finally, the columns of this matrix may be converted from homogeneous coordinates by dividing by the final coordinate:

(4.2, 1.2, 0, .6) → (4.2/.6, 1.2/.6, 0/.6) = (7, 2, 0)
(6, 4, 0, .8) → (6/.8, 4/.8, 0/.8) = (7.5, 5, 0)
(2, 2, 0, .4) → (2/.4, 2/.4, 0/.4) = (5, 5, 0)

So the coordinates of the vertices of the transformed triangle are (7, 2, 0), (7.5, 5, 0), and (5, 5, 0).
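The projection and final division can be combined in one small function (a sketch; `perspective_project` is our own name, and the center of projection (0, 0, d) with d = 10 is the one used in this exercise):

```python
def perspective_project(point, d=10):
    # With center of projection (0, 0, d), the homogeneous image of
    # (x, y, z, 1) is (x, y, 0, 1 - z/d); dividing by the final
    # coordinate gives the projected point in the xy-plane.
    x, y, z = point
    w = 1 - z / d
    return (x / w, y / w, 0.0)

vertices = [(4.2, 1.2, 4), (6, 4, 2), (2, 2, 6)]
expected = [(7, 2, 0), (7.5, 5, 0), (5, 5, 0)]
for v, target in zip(vertices, expected):
    assert all(abs(a - b) < 1e-9 for a, b in zip(perspective_project(v), target))
```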
20. As in the previous exercise, the matrix P that performs the perspective projection is

P = [1  0   0   0]
    [0  1   0   0]
    [0  0   0   0]
    [0  0  –.1  1]

The homogeneous coordinates of the vertices of the triangle may be written as (7, 3, –5, 1), (12, 8, 2, 1), and (1, 2, 1, 1), so the data matrix for S is

[ 7  12  1]
[ 3   8  2]
[–5   2  1]
[ 1   1  1]

and the data matrix for the transformed triangle is

[1  0   0   0][ 7  12  1]   [ 7   12   1 ]
[0  1   0   0][ 3   8  2] = [ 3    8   2 ]
[0  0   0   0][–5   2  1]   [ 0    0   0 ]
[0  0  –.1  1][ 1   1  1]   [1.5   .8  .9]

Finally, the columns of this matrix may be converted from homogeneous coordinates by dividing by the final coordinate:

(7, 3, 0, 1.5) → (7/1.5, 3/1.5, 0/1.5) = (4.67, 2, 0)
(12, 8, 0, .8) → (12/.8, 8/.8, 0/.8) = (15, 10, 0)
(1, 2, 0, .9) → (1/.9, 2/.9, 0/.9) = (1.11, 2.22, 0)

So the coordinates of the vertices of the transformed triangle are (4.67, 2, 0), (15, 10, 0), and (1.11, 2.22, 0).
21. [M] Solve the given equation for the vector (R, G, B), giving

[R]   [.61  .29  .15 ]^(–1) [X]   [ 2.2586  –1.0395   –.3473] [X]
[G] = [.35  .59  .063]      [Y] = [–1.3495   2.3441    .0696] [Y]
[B]   [.04  .12  .787]      [Z]   [  .0910   –.3046   1.2777] [Z]

22. [M] Solve the given equation for the vector (R, G, B), giving

[R]   [.299   .587   .114]^(–1) [Y]   [1.0031    .9548    .6179] [Y]
[G] = [.596  –.275  –.321]      [I] = [ .9968   –.2707   –.6448] [I]
[B]   [.212  –.528   .311]      [Q]   [1.0085  –1.1105   1.6996] [Q]
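In practice one need not form the inverse explicitly; the 3×3 system can be solved directly. A minimal sketch (the helper `solve3` is our own, not part of the text or any particular software):

```python
def solve3(A, b):
    # Solve the 3x3 system A x = b by Gaussian elimination with
    # partial pivoting, followed by back-substitution.
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(3):
        p = max(range(k, 3), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, 3):
            f = M[i][k] / M[k][k]
            M[i] = [a - f * c for a, c in zip(M[i], M[k])]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        x[i] = (M[i][3] - sum(M[i][j] * x[j] for j in range(i + 1, 3))) / M[i][i]
    return x

A = [[.61, .29, .15], [.35, .59, .063], [.04, .12, .787]]
# Sanity check: the XYZ image of (R, G, B) = (1, 1, 1) maps back to (1, 1, 1).
xyz = [sum(row) for row in A]
assert all(abs(c - 1) < 1e-9 for c in solve3(A, xyz))
```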
2.8 SOLUTIONS
Notes: Cover this section only if you plan to skip most or all of Chapter 4. This section and the next
cover everything you need from Sections 4.1–4.6 to discuss the topics in Section 4.9 and Chapters 5–7
(except for the general inner product spaces in Sections 6.7 and 6.8). Students may use Section 4.2 for
review, particularly the Table near the end of the section. (The final subsection on linear transformations
should be omitted.) Example 6 and the associated exercises are critical for work with eigenspaces in
Chapters 5 and 7. Exercises 31–36 review the Invertible Matrix Theorem. New statements will be added
to this theorem in Section 2.9.
Key Exercises: 5–20 and 23–26.
1. The set is closed under sums but not under multiplication
by a negative scalar. A counterexample to the subspace
condition is shown at the right.
Note: Most students prefer to give a geometric counterexample, but some may choose an algebraic calcu-
lation. The four exercises here should help students develop an understanding of subspaces, but they may
be insufficient if you want students to be able to analyze an unfamiliar set on an exam. Developing that
skill seems more appropriate for classes covering Sections 4.1–4.6.
2. The set is closed under scalar multiples but not sums.
For example, the sum of the vectors in H shown
here is not in H.
3. No. The set is not closed under sums or scalar multiples. See the diagram.
4. No. The set is closed under sums, but not under multiplication by a
negative scalar.
5. The vector w is in the subspace generated by v1 and v2 if and only if the vector equation x1v1 + x2v2 = w is consistent. The row operations below show that w is in the subspace generated by v1 and v2.

[v1 v2 w] = [ 1   2    3]   [1   2   3]   [1   2   3]
            [ 3   3    3] ~ [0  –3  –6] ~ [0  –3  –6]
            [–4  –7  –10]   [0   1   2]   [0   0   0]
6. The vector u is in the subspace generated by {v1, v2, v3} if and only if the vector equation x1v1 + x2v2 + x3v3 = u is consistent. Row reduction of the augmented matrix [v1 v2 v3 u] produces an echelon form whose last row is [0 0 0 6]. This row represents the inconsistent equation 0 = 6, so u is not in the subspace generated by {v1, v2, v3}.
Note: For a quiz, you could use w = (9, –7, 11, 12), which is in Span{v1, v2, v3}.
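The consistency test used in Exercises 5 and 6 can be automated. A sketch using exact rational arithmetic (the helper `consistent` is ours; the sample matrix is the augmented matrix [v1 v2 w] from Exercise 5 as reconstructed above):

```python
from fractions import Fraction

def consistent(aug):
    # Row reduce the augmented matrix [A b] exactly and report whether
    # any row reads 0 = nonzero (which would make the system inconsistent).
    M = [[Fraction(x) for x in row] for row in aug]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols - 1):
        piv = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(rows):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return all(M[i][-1] == 0 or any(M[i][c] != 0 for c in range(cols - 1))
               for i in range(rows))

# Exercise 5: [v1 v2 w] is consistent, so w is in Span{v1, v2}.
assert consistent([[1, 2, 3], [3, 3, 3], [-4, -7, -10]])
```

Exact `Fraction` arithmetic avoids the round-off that can make a borderline pivot look nonzero in floating point.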
7. a. There are three vectors: v1, v2, and v3 in the set {v1, v2, v3}.
b. There are infinitely many vectors in Span{v1, v2, v3} = Col A.
c. Deciding whether p is in Col A requires calculation:

[A p] = [ 2  –3  –4    6]   [2  –3   –4    6]   [2  –3   –4   6]
        [–8   8   6  –10] ~ [0  –4  –10   14] ~ [0  –4  –10  14]
        [ 6  –7  –7   11]   [0   2    5   –7]   [0   0    0   0]
The equation Ax = p has a solution, so p is in Col A.
8.

[A p] = [–2  –2   0  –6]   [–2  –2   0  –6]   [–2  –2   0  –6]
        [ 0   3  –5   1] ~ [ 0   3  –5   1] ~ [ 0   3  –5   1]
        [ 6   3   5  17]   [ 0  –3   5  –1]   [ 0   0   0   0]

Yes, the augmented matrix [A p] corresponds to a consistent system, so p is in Col A.
9. To determine whether p is in Nul A, simply compute Ap. Using A and p as in Exercise 7,

Ap = [ 2  –3  –4][  6]   [ –2]
     [–8   8   6][–10] = [–62]
     [ 6  –7  –7][ 11]   [ 29]

Since Ap ≠ 0, p is not in Nul A.
10. To determine whether u is in Nul A, simply compute Au. Using A as in Exercise 8 and u = (–5, 5, 3),

Au = [–2  –2   0][–5]   [0]
     [ 0   3  –5][ 5] = [0]
     [ 6   3   5][ 3]   [0]

Yes, u is in Nul A.
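The membership test for Nul A is just a matrix–vector product. A sketch (the helper `matvec` is ours; the matrices are those of Exercises 7 and 8 as shown above):

```python
def matvec(A, x):
    # Compute the product Ax row by row.
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

# Exercise 9: Ap != 0, so p is not in Nul A.
A7 = [[2, -3, -4], [-8, 8, 6], [6, -7, -7]]
assert matvec(A7, [6, -10, 11]) == [-2, -62, 29]

# Exercise 10: Au = 0, so u is in Nul A.
A8 = [[-2, -2, 0], [0, 3, -5], [6, 3, 5]]
assert matvec(A8, [-5, 5, 3]) == [0, 0, 0]
```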
11. p = 4 and q = 3. Nul A is a subspace of R^4 because solutions of Ax = 0 must have 4 entries, to match the columns of A. Col A is a subspace of R^3 because each column vector has 3 entries.
12. p = 3 and q = 5. Nul A is a subspace of R^3 because solutions of Ax = 0 must have 3 entries, to match the columns of A. Col A is a subspace of R^5 because each column vector has 5 entries.
13. To produce a vector in Col A, select any column of A. For Nul A, solve the equation Ax = 0. (Include an augmented column of zeros, to avoid errors.)

[ 3   2   1  –5  0]   [3   2   1  –5  0]   [3   2  1  –5  0]
[–9  –4   1   7  0] ~ [0   2   4  –8  0] ~ [0   2  4  –8  0]
[ 9   2  –5   1  0]   [0  –4  –8  16  0]   [0   0  0   0  0]

  [3  2  1  –5  0]   [1  0  –1   1  0]       x1 – x3 + x4 = 0
~ [0  1  2  –4  0] ~ [0  1   2  –4  0],      x2 + 2x3 – 4x4 = 0
  [0  0  0   0  0]   [0  0   0   0  0]                   0 = 0

The general solution is x1 = x3 – x4 and x2 = –2x3 + 4x4, with x3 and x4 free. The general solution in parametric vector form is not needed. All that is required here is one nonzero vector. So choose any values for x3 and x4 (not both zero). For instance, set x3 = 1 and x4 = 0 to obtain the vector (1, –2, 1, 0) in Nul A.
Note: Section 2.8 of the Study Guide introduces the ref command (or rref, depending on the technology), which produces the reduced echelon form of a matrix. This will greatly speed up homework for students who have a matrix program available.
14. To produce a vector in Col A, select any column of A. For Nul A, solve the equation Ax = 0. The augmented matrix [A 0] row reduces to

[1  0  –1/3  0]
[0  1   5/3  0]
[0  0    0   0]
[0  0    0   0]
[0  0    0   0]

so the general solution is x1 = (1/3)x3 and x2 = (–5/3)x3, with x3 free. The general solution in parametric vector form is not needed. All that is required here is one nonzero vector. So choose any nonzero value of x3. For instance, set x3 = 3 to obtain the vector (1, –5, 3) in Nul A.
15. Yes. Let A be the matrix whose columns are the vectors given. Then A is invertible because its determinant is nonzero, and so its columns form a basis for R^2, by the Invertible Matrix Theorem (or by Example 5). (Other reasons for the invertibility of A could be given.)

16. No. One vector is a multiple of the other, so they are linearly dependent and hence cannot be a basis for any subspace.
17. Yes. Place the three vectors into a 3×3 matrix A and determine whether A is invertible:

A = [0  5  6]   [2  4  2]
    [0  0  3] ~ [0  5  6]
    [2  4  2]   [0  0  3]

The matrix A has three pivots, so A is invertible by the IMT and its columns form a basis for R^3 (as pointed out in Example 5).
18. No. Place the three vectors into a 3×3 matrix A and determine whether A is invertible:

A = [ 1   3   5]   [1   3   5]   [1   3   5]
    [ 1  –1   1] ~ [0  –4  –4] ~ [0  –4  –4]
    [–3   2  –4]   [0  11  11]   [0   0   0]

The matrix A has two pivots, so A is not invertible by the IMT and its columns do not form a basis for R^3 (as pointed out in Example 5).
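Counting pivots, as in Exercises 17 and 18, can be done mechanically. A sketch (the helper `num_pivots` is our own; it performs forward elimination over the rationals, and the pivot count equals the rank):

```python
from fractions import Fraction

def num_pivots(A):
    # Forward elimination with exact arithmetic; return the number of pivots.
    M = [[Fraction(x) for x in row] for row in A]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, len(M)):
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# Exercise 17: three pivots, so the columns form a basis for R^3.
assert num_pivots([[0, 5, 6], [0, 0, 3], [2, 4, 2]]) == 3
# Exercise 18: only two pivots, so the columns do not form a basis.
assert num_pivots([[1, 3, 5], [1, -1, 1], [-3, 2, -4]]) == 2
```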
19. No. The vectors cannot be a basis for R^3 because they only span a plane in R^3. Or, point out that the columns of the 3×2 matrix whose columns are the two given vectors cannot possibly span R^3, because a 3×2 matrix cannot have a pivot in every row. So the columns are not a basis for R^3.

Note: The Study Guide warns students not to say that the two vectors here are a basis for R^2.
20. No. The vectors are linearly dependent because there are more vectors in the set than entries in each vector. (Theorem 8 in Section 1.7.) So the vectors cannot be a basis for any subspace.

21. a. False. See the definition at the beginning of the section. The critical phrases “for each” are missing.
b. True. See the paragraph before Example 4.
c. False. See Theorem 12. The null space is a subspace of R^n, not R^m.
d. True. See Example 5.
e. True. See the first part of the solution of Example 8.

22. a. False. See the definition at the beginning of the section. The condition about the zero vector is only one of the conditions for a subspace.
b. False. See the warning that follows Theorem 13.
c. True. See Example 3.
d. False. Since y need not be in H, it is not guaranteed by the definition of a subspace that x + y will be in H.
e. False. See the paragraph after Example 4.
23. (Solution in Study Guide)

A = [4  5  9  –2]   [1  2  6  –5]
    [6  5  1  12] ~ [0  1  5  –6]
    [3  4  8  –3]   [0  0  0   0]

The echelon form identifies columns 1 and 2 as the pivot columns. A basis for Col A uses columns 1 and 2 of A:

[4]  [5]
[6], [5]
[3]  [4]

This is not the only choice, but it is the “standard” choice. A wrong choice is to select columns 1 and 2 of the echelon form. These columns have zero in the third entry and could not possibly generate the columns displayed in A.

For Nul A, obtain the reduced (and augmented) echelon form for Ax = 0:

[1  0  –4   7  0]      x1 – 4x3 + 7x4 = 0
[0  1   5  –6  0]      x2 + 5x3 – 6x4 = 0
[0  0   0   0  0]                   0 = 0

Solve for the basic variables and write the solution of Ax = 0 in parametric vector form:

[x1]   [ 4x3 – 7x4]      [ 4]      [–7]
[x2]   [–5x3 + 6x4]      [–5]      [ 6]
[x3] = [    x3    ] = x3 [ 1] + x4 [ 0]
[x4]   [    x4    ]      [ 0]      [ 1]

Basis for Nul A:

[ 4]  [–7]
[–5]  [ 6]
[ 1], [ 0]
[ 0]  [ 1]
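The procedure of Exercise 23 (reduce to RREF, then read off one basis vector of Nul A per free variable) can be sketched in Python (the helper `nul_basis` is ours; the sample matrix is the one from Exercise 23 above):

```python
from fractions import Fraction

def nul_basis(A):
    # Compute the RREF of A exactly, then build one basis vector of
    # Nul A per free variable: set that free variable to 1, the other
    # free variables to 0, and solve for the basic variables.
    M = [[Fraction(x) for x in row] for row in A]
    m, n = len(M), len(M[0])
    pivots, r = [], 0
    for c in range(n):
        piv = next((i for i in range(r, m) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(m):
            if i != r:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    basis = []
    for free in [c for c in range(n) if c not in pivots]:
        v = [Fraction(0)] * n
        v[free] = Fraction(1)
        for i, c in enumerate(pivots):
            v[c] = -M[i][free]
        basis.append(v)
    return basis

# Exercise 23: the two basis vectors found above.
A = [[4, 5, 9, -2], [6, 5, 1, 12], [3, 4, 8, -3]]
assert nul_basis(A) == [[4, -5, 1, 0], [-7, 6, 0, 1]]
```

This is exactly the "shortcut" mentioned in Note (3) below: each basis vector comes from setting one free variable to 1 and the rest to 0.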
Notes: (1) A basis is a set of vectors. For simplicity, the answers here and in the text list the vectors without enclosing the list inside set brackets. This style is also easier for students. I am careful, however, to distinguish between a matrix and the set or list whose elements are the columns of the matrix.

(2) Recall from Chapter 1 that students are encouraged to use the augmented matrix when solving Ax = 0, to avoid the common error of misinterpreting the reduced echelon form of A as itself the augmented matrix for a nonhomogeneous system.

(3) Because the concept of a basis is just being introduced, I insist that my students write the parametric vector form of the solution of Ax = 0. They see how the basis vectors span the solution space and are obviously linearly independent. A shortcut, which some instructors might introduce later in the course, is only to solve for the basic variables and to produce each basis vector one at a time. Namely, set all free variables equal to zero except for one free variable, and set that variable equal to a suitable nonzero number.
24.

A = [3  –6  9   0]   [1  –2  5  4]
    [2  –4  7   2] ~ [0   0  3  6]
    [3  –6  6  –6]   [0   0  0  0]

Basis for Col A:

[3]  [9]
[2], [7]
[3]  [6]

For Nul A, obtain the reduced (and augmented) echelon form for Ax = 0:

[1  –2  0  –6  0]      x1 – 2x2 – 6x4 = 0
[0   0  1   2  0]      x3 + 2x4 = 0
[0   0  0   0  0]      0 = 0

Solve for the basic variables and write the solution of Ax = 0 in parametric vector form:

[x1]   [2x2 + 6x4]      [2]      [ 6]
[x2]   [    x2   ]      [1]      [ 0]
[x3] = [  –2x4   ] = x2 [0] + x4 [–2]
[x4]   [    x4   ]      [0]      [ 1]

Basis for Nul A:

[2]  [ 6]
[1]  [ 0]
[0], [–2]
[0]  [ 1]
25.

A = [ 1  4  8  –3   7]   [1  4  8  0  –5]
    [–1  2  7   3  –4] ~ [0  2  5  0   1]
    [–2  2  9   5  –5]   [0  0  0  1  –4]
    [ 3  6  9  –5   2]   [0  0  0  0   0]

Basis for Col A:

[ 1]  [4]  [–3]
[–1]  [2]  [ 3]
[–2], [2], [ 5]
[ 3]  [6]  [–5]

For Nul A, obtain the reduced (and augmented) echelon form for Ax = 0:

        [1  0  –2   0  –7  0]      x1 – 2x3 – 7x5 = 0
[A 0] ~ [0  1  2.5  0  .5  0]      x2 + 2.5x3 + .5x5 = 0
        [0  0   0   1  –4  0]      x4 – 4x5 = 0
        [0  0   0   0   0  0]                 0 = 0

The solution of Ax = 0 in parametric vector form:

[x1]   [  2x3 + 7x5 ]      [  2 ]      [ 7 ]
[x2]   [–2.5x3 – .5x5]     [–2.5]      [–.5]
[x3] = [     x3     ] = x3 [  1 ] + x5 [ 0 ] = x3 u + x5 v
[x4]   [    4x5     ]      [  0 ]      [ 4 ]
[x5]   [     x5     ]      [  0 ]      [ 1 ]

Basis for Nul A: {u, v}.
Note: The solution above illustrates how students could write a solution on an exam, when time is precious, namely, describe the basis by giving names to appropriate vectors found in the calculations.
26.

A = [3  –1  –3  –1   8]   [3  –1  –3  0   6]
    [3   1   3   0   2] ~ [0   2   6  0  –4]
    [0   3   9  –1  –4]   [0   0   0  1  –2]
    [6   3   9  –2   6]   [0   0   0  0   0]

Basis for Col A:

[3]  [–1]  [–1]
[3]  [ 1]  [ 0]
[0], [ 3], [–1]
[6]  [ 3]  [–2]

For Nul A,

        [1  0  0  0  4/3  0]      x1 + (4/3)x5 = 0
[A 0] ~ [0  1  3  0  –2   0]      x2 + 3x3 – 2x5 = 0
        [0  0  0  1  –2   0]      x4 – 2x5 = 0
        [0  0  0  0   0   0]                0 = 0

The solution of Ax = 0 in parametric vector form:

[x1]   [ –(4/3)x5 ]      [ 0]      [–4/3]
[x2]   [–3x3 + 2x5]      [–3]      [  2 ]
[x3] = [    x3    ] = x3 [ 1] + x5 [  0 ] = x3 u + x5 v
[x4]   [   2x5    ]      [ 0]      [  2 ]
[x5]   [    x5    ]      [ 0]      [  1 ]

Basis for Nul A: {u, v}.
27. Construct a nonzero 3×3 matrix A and construct b to be almost any convenient linear combination of
the columns of A.
28. The easiest construction is to write a 3×3 matrix in echelon form that has only 2 pivots, and let b be any vector in R^3 whose third entry is nonzero.
29. (Solution in Study Guide) A simple construction is to write any nonzero 3×3 matrix whose columns
are obviously linearly dependent, and then make b a vector of weights from a linear dependence
relation among the columns. For instance, if the first two columns of A are equal, then b could be (1,
–1, 0).
30. Since Col A is the set of all linear combinations of a1, …, ap, the set {a1, …, ap} spans Col A. Because {a1, …, ap} is also linearly independent, it is a basis for Col A. (There is no need to discuss pivot columns and Theorem 13, though a proof could be given using this information.)
31. If Col F ≠ R^5, then the columns of F do not span R^5. Since F is square, the IMT shows that F is not invertible and the equation Fx = 0 has a nontrivial solution. That is, Nul F contains a nonzero vector. Another way to describe this is to write Nul F ≠ {0}.
32. If Col B = R^7, then the columns of B span R^7. Since B is square, the IMT shows that B is invertible and the equation Bx = b has a solution for each b in R^7. Also, each solution is unique, by Theorem 5 in Section 2.2.

33. If Nul C = {0}, then the equation Cx = 0 has only the trivial solution. Since C is square, the IMT shows that C is invertible and the equation Cx = b has a solution for each b in R^6. Also, each solution is unique, by Theorem 5 in Section 2.2.

34. If the columns of A form a basis, they are linearly independent. This means that A cannot have more columns than rows. Since the columns also span R^m, A must have a pivot in each row, which means that A cannot have more rows than columns. As a result, A must be a square matrix.

35. If Nul B contains nonzero vectors, then the equation Bx = 0 has nontrivial solutions. Since B is square, the IMT shows that B is not invertible and the columns of B do not span R^5. So Col B is a subspace of R^5, but Col B ≠ R^5.
36. If the columns of C are linearly independent, then the equation Cx = 0 has only the trivial (zero)
solution. That is, Nul C = {0}.
37. [M] Use the command that produces the reduced echelon form in one step (ref or rref depending on the program). See Section 2.8 in the Study Guide for details. By Theorem 13, the pivot columns of A form a basis for Col A.

A = [ 3  –5   0  –1    3]   [1  0  2.5  –4.5  3.5]
    [–7   9  –4   9  –11] ~ [0  1  1.5  –2.5  1.5]
    [–5   7  –2   5   –7]   [0  0   0     0    0 ]
    [ 3  –7  –3   4    0]   [0  0   0     0    0 ]

Basis for Col A:

[ 3]  [–5]
[–7]  [ 9]
[–5], [ 7]
[ 3]  [–7]

For Nul A, obtain the solution of Ax = 0 in parametric vector form:

x1 + 2.5x3 – 4.5x4 + 3.5x5 = 0
x2 + 1.5x3 – 2.5x4 + 1.5x5 = 0

Solution:

x1 = –2.5x3 + 4.5x4 – 3.5x5
x2 = –1.5x3 + 2.5x4 – 1.5x5
x3, x4, and x5 are free

[x1]   [–2.5x3 + 4.5x4 – 3.5x5]      [–2.5]      [4.5]      [–3.5]
[x2]   [–1.5x3 + 2.5x4 – 1.5x5]      [–1.5]      [2.5]      [–1.5]
[x3] = [          x3          ] = x3 [  1 ] + x4 [ 0 ] + x5 [  0 ]
[x4]   [          x4          ]      [  0 ]      [ 1 ]      [  0 ]
[x5]   [          x5          ]      [  0 ]      [ 0 ]      [  1 ]

x = x3 u + x4 v + x5 w

By the argument in Example 6, a basis for Nul A is {u, v, w}.
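The ref/rref command referred to above can be imitated exactly over the rationals. A sketch (the helper `rref` is our own; the sample matrix is the one from Exercise 37 as reconstructed above):

```python
from fractions import Fraction

def rref(A):
    # Exact reduced row echelon form: the hand analogue of ref/rref.
    M = [[Fraction(x) for x in row] for row in A]
    m, n = len(M), len(M[0])
    r = 0
    for c in range(n):
        piv = next((i for i in range(r, m) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(m):
            if i != r:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
    return M

A = [[3, -5, 0, -1, 3], [-7, 9, -4, 9, -11],
     [-5, 7, -2, 5, -7], [3, -7, -3, 4, 0]]
R = rref(A)
assert R[0] == [1, 0, Fraction(5, 2), Fraction(-9, 2), Fraction(7, 2)]
assert R[1] == [0, 1, Fraction(3, 2), Fraction(-5, 2), Fraction(3, 2)]
assert R[2] == [0, 0, 0, 0, 0] and R[3] == [0, 0, 0, 0, 0]
```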
38. [M]

A = [ 5   3   2  –6  –8]   [1  0   1  0  0]
    [ 4   1   3  –8  –7] ~ [0  1  –1  0  0]
    [ 5   1   4   5  19]   [0  0   0  1  0]
    [–7  –5  –2   8   5]   [0  0   0  0  1]

The pivot columns of A form a basis for Col A:

[ 5]  [ 3]  [–6]  [–8]
[ 4]  [ 1]  [–8]  [–7]
[ 5], [ 1], [ 5], [19]
[–7]  [–5]  [ 8]  [ 5]

For Nul A, solve Ax = 0:

x1 + x3 = 0
x2 – x3 = 0
x4 = 0
x5 = 0

Solution:

x1 = –x3
x2 = x3
x3 is free
x4 = 0
x5 = 0

[x1]   [–x3]      [–1]
[x2]   [ x3]      [ 1]
[x3] = [ x3] = x3 [ 1]
[x4]   [ 0 ]      [ 0]
[x5]   [ 0 ]      [ 0]

x = x3 u

By the method of Example 6, a basis for Nul A is {u}.
Note: The Study Guide for Section 2.8 gives directions for students to construct a review sheet for the concept of a subspace and the two main types of subspaces, Col A and Nul A, and a review sheet for the concept of a basis. I encourage you to consider making this an assignment for your class.
2.9 SOLUTIONS
Notes: This section contains the ideas from Sections 4.4–4.6 that are needed for later work in Chapters 5–7. If you have time, you can enrich the geometric content of “coordinate systems” by discussing crystal lattices (Example 3 and Exercises 35 and 36 in Section 4.4). Some students might profit from reading Examples 1–3 from Section 4.4 and Examples 2, 4, and 5 from Section 4.6. Section 4.5 is probably not a good reference for students who have not considered general vector spaces.

Coordinate vectors are important mainly to give an intuitive and geometric feeling for the isomorphism between a k-dimensional subspace and R^k. If you plan to omit Sections 5.4, 5.6, 5.7 and 7.2, you can safely omit Exercises 1–8 here.

Exercises 1–16 may be assigned after students have read as far as Example 2. Exercises 19 and 20 use the Rank Theorem, but they can also be assigned before the Rank Theorem is discussed.

The Rank Theorem in this section omits the nontrivial fact about Row A which is included in the Rank Theorem of Section 4.6, but that is used only in Section 7.4. The row space itself can be introduced in Section 6.2, for use in Chapter 6 and Section 7.4.

Exercises 9–16 include important review of techniques taught in Section 2.8 (and in Sections 1.2 and 2.5). They make good test questions because they require little arithmetic. My students need the practice here. Nearly every time I teach the course and start Chapter 5, I find that at least one or two students cannot find a basis for a two-dimensional eigenspace!
1. If [x]_B = [3], then x is formed from b1 and b2 using weights 3 and 2:
             [2]

x = 3b1 + 2b2 = 3[1] + 2[ 2] = [7]
                 [1]    [–1]   [1]
2. If [x]_B = [–1], then x is formed from b1 and b2 using weights –1 and 2:
              [ 2]

x = (–1)b1 + 2b2 = (–1)[–3] + 2[3] = [9]
                       [ 1]    [2]   [3]
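Recovering x from its B-coordinate vector is just a weighted sum of the basis vectors. A sketch (the helper `from_coords` is ours; the data are from Exercises 1 and 2 as reconstructed above):

```python
def from_coords(b1, b2, c):
    # x = c1*b1 + c2*b2 for a basis B = {b1, b2} of R^2.
    return tuple(c[0] * u + c[1] * v for u, v in zip(b1, b2))

assert from_coords((1, 1), (2, -1), (3, 2)) == (7, 1)     # Exercise 1
assert from_coords((-3, 1), (3, 2), (-1, 2)) == (9, 3)    # Exercise 2
```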
Copyright © 2012 Pea
r
3. To find c
1
and c
2
that satisfy x = c
1
b
12
210 1
[] ~
357 0
ªºª
=«»«
¬¼¬
bb x
suggested by Exercise 1 and solve
u
[x]
B
=
1
2
1.
2
c
c
ªºªº
=
«»«»
¬¼
¬¼
4. As in Exercise 3,
12
1
[]
5
ª
=«
¬
bb x
and [x]
B
=
1
2
3.
2
c
c
ªºªº
=
«»«»
¬¼
¬¼
5.
12
122 1
[]479~0
357 0
ªºª
«»«
=
«»«
«»«
−−
¬¼¬
bb x
6.
12
375 1
[]230~0
452 0
ªºª
«»«
=
«»«
«»«
−−
¬¼¬
bb x
7. Fig. 1 suggests that w = 2b
1
b
2
an
[w]
B
= 2
1
ªº
«»
¬¼
and [x]
B
= 1.5
.5
ªº
«»
¬¼
. To
c
12
3
1
1.5 .5 1.5 .5
0
2
ª
ºª
+= +
«
»«
¬
¼¬
bb
Figure 1
Note
: Figures 1 and 2 display what Sec
t
8. Fig. 2 suggests that x = b
1
+ b
2
, y =
[x]
B
= 1
1
ªº
«»
¬¼
, [y]
B
= 1/3
1
ªº
«»
¬¼
, and [z]
B
b
2
b
1
x
w
0
2.9 Sol
u
r
son Education, Inc. Publishing as Addison-Wesley.
b
1
+ c
2
b
2
, row reduce the augmented matrix:
1/2 0 1 0 1
~
7/2 7 0 1 2
ºª º
»« »
¼¬ ¼
. Or, one can write a matrix eq
u
sing the matrix inverse. In either case,
21 1 2 1 10 3
~~
39 0 714 0 1 2
−− −
ºª ºª º
»« »« »
−−
¼¬ ¼¬ ¼
,
22 104
11~011
11 000
ºª º
»« »
»« »
»« »
−−
¼¬ ¼
. [x]
B
=
1
2
4.
1
c
c
ªºªº
=
«»«»
¬¼
¬¼
15103
12~012
13 / 3 26 / 3 0 0 0
ºª º
»« »
»« »
»« »
¼¬ ¼
, [x]
B
=
1
2
c
c
ªº
ª
=
«»«
¬
¬¼
n
d x = 1.5b
1
+ .5b
2
, in which case,
c
onfirm [x]
B
, compute
1
4
2
1
ºªº
==
»«»
¼¬¼x
Figure 2
t
ion 4.4 calls B-graph paper.
1/3b
1
- b
2
, and z = –4/3 b
1
+2b
2
. If so, then
= 4/3
2
ªº
«»
¬¼
. To confirm [y]
B
and [z]
B
, compute
u
tions 155
uation as
3.
2
ª
º
»
¬
¼
156 CHAPTER 2 Matrix Algebra
Copyright © 2012 Pearson Education, Inc. Publishing as Addison-Wesley.
12
02 2
(1 / 3) (1 / 3) 32 1
ªº ªº ª º
===
«» «» « »
¬¼ ¬¼ ¬ ¼
bb y
and
12
024
(4/3) 2 4/3 2
320
ªº ªº ª º
+=+==
«» «» « »
¬¼ ¬¼ ¬ ¼
bb z.
9. The information

A = [1   3   2  –6]   [1  3  2  –6]
    [3   9   1   5] ~ [0  0  5   7]
    [2   6  –1  –9]   [0  0  0   5]
    [5  15   0 –14]   [0  0  0   0]

is enough to see that columns 1, 3, and 4 of A form a basis for Col A:

[1]  [ 2]  [ –6]
[3]  [ 1]  [  5]
[2], [–1], [ –9]
[5]  [ 0]  [–14]

Columns 1, 3, and 4 of the echelon form certainly cannot span Col A since those vectors all have zero in their fourth entries. For Nul A, use the reduced echelon form, augmented with a zero column to insure that the equation Ax = 0 is kept in mind:

[1  3  0  0  0]      x1 + 3x2 = 0
[0  0  1  0  0]      x3 = 0
[0  0  0  1  0]      x4 = 0
[0  0  0  0  0]      x2 is the free variable

x = [x1]   [–3x2]      [–3]
    [x2] = [  x2] = x2 [ 1]
    [x3]   [  0 ]      [ 0]
    [x4]   [  0 ]      [ 0]

So (–3, 1, 0, 0) is a basis for Nul A. From this information, dim Col A = 3 (because A has three pivot columns) and dim Nul A = 1 (because the equation Ax = 0 has only one free variable).
10. The information

A = [ 1  –2  –1  5   4]   [1  –2  –1  2  0]
    [ 2  –1   1  5   6] ~ [0   1   1  0  3]
    [–2   0  –2  1  –6]   [0   0   0  1  0]
    [ 3   1   4  1   5]   [0   0   0  0  1]

shows that columns 1, 2, 4, and 5 of A form a basis for Col A:

[ 1]  [–2]  [5]  [ 4]
[ 2]  [–1]  [5]  [ 6]
[–2], [ 0], [1], [–6]
[ 3]  [ 1]  [1]  [ 5]

For Nul A,

        [1  0  1  0  0  0]      x1 + x3 = 0
[A 0] ~ [0  1  1  0  0  0]      x2 + x3 = 0
        [0  0  0  1  0  0]      x4 = 0
        [0  0  0  0  1  0]      x5 = 0
                                x3 is a free variable

x = [x1]   [–x3]      [–1]
    [x2]   [–x3]      [–1]
    [x3] = [ x3] = x3 [ 1]
    [x4]   [ 0 ]      [ 0]
    [x5]   [ 0 ]      [ 0]

Basis for Nul A:

[–1]
[–1]
[ 1]
[ 0]
[ 0]

From this, dim Col A = 4 and dim Nul A = 1.
11. The information

A = [ 2   4  5   2   3]   [1  2  5  1  4]
    [ 3   6  8   3   5] ~ [0  0  5  0  5]
    [ 0   0  9   0   9]   [0  0  0  0  0]
    [–3  –6  7  –3  10]   [0  0  0  0  0]

shows that columns 1 and 3 of A form a basis for Col A:

[ 2]  [5]
[ 3]  [8]
[ 0], [9]
[–3]  [7]

For Nul A,

        [1  2  0  1  –1  0]      x1 + 2x2 + x4 – x5 = 0
[A 0] ~ [0  0  1  0   1  0]      x3 + x5 = 0
        [0  0  0  0   0  0]      x2, x4, and x5 are free variables
        [0  0  0  0   0  0]

x = [x1]   [–2x2 – x4 + x5]      [–2]      [–1]      [ 1]
    [x2]   [      x2      ]      [ 1]      [ 0]      [ 0]
    [x3] = [     –x5      ] = x2 [ 0] + x4 [ 0] + x5 [–1]
    [x4]   [      x4      ]      [ 0]      [ 1]      [ 0]
    [x5]   [      x5      ]      [ 0]      [ 0]      [ 1]

Basis for Nul A:

[–2]  [–1]  [ 1]
[ 1]  [ 0]  [ 0]
[ 0], [ 0], [–1]
[ 0]  [ 1]  [ 0]
[ 0]  [ 0]  [ 1]

From this, dim Col A = 2 and dim Nul A = 3.
12. The information

A = [1  –2   4  –4    6]   [1  –2  4  –4   6]
    [5   1  –9   2  –10] ~ [0   2  3   4   1]
    [4   6  –9  12  –15]   [0   0  5   0   5]
    [3   4  –5   8   –9]   [0   0  0   0   0]

shows that columns 1, 2, and 3 of A form a basis for Col A:

[1]  [–2]  [ 4]
[5]  [ 1]  [–9]
[4], [ 6], [–9]
[3]  [ 4]  [–5]

For Nul A,

        [1  0  0  0   0  0]      x1 = 0
[A 0] ~ [0  1  0  2  –1  0]      x2 + 2x4 – x5 = 0
        [0  0  1  0   1  0]      x3 + x5 = 0
        [0  0  0  0   0  0]      x4 and x5 are free variables

x = [x1]   [    0     ]      [ 0]      [ 0]
    [x2]   [–2x4 + x5 ]      [–2]      [ 1]
    [x3] = [   –x5    ] = x4 [ 0] + x5 [–1]
    [x4]   [    x4    ]      [ 1]      [ 0]
    [x5]   [    x5    ]      [ 0]      [ 1]

Basis for Nul A:

[ 0]  [ 0]
[–2]  [ 1]
[ 0], [–1]
[ 1]  [ 0]
[ 0]  [ 1]

From this, dim Col A = 3 and dim Nul A = 2.
13. The four vectors span the column space H of a matrix that can be reduced to echelon form:

[1   3  –2  4]   [1  3  –2   4]   [1  3  –2   4]   [1  3  –2   4]
[3   9  –1  5] ~ [0  0   5  –7] ~ [0  0   5  –7] ~ [0  0   5  –7]
[2   6  –4  3]   [0  0   0  –5]   [0  0   0  –5]   [0  0   0  –5]
[4  12   2  7]   [0  0  10  –9]   [0  0   0   5]   [0  0   0   0]

Columns 1, 3, and 4 of the original matrix form a basis for H, so dim H = 3.

Note: Either Exercise 13 or 14 should be assigned because there are always one or two students who confuse Col A with Nul A. Or, they wrongly connect “set of linear combinations” with “parametric vector form” (of the general solution of Ax = 0).
14. The five vectors span the column space H of a matrix that can be reduced to echelon form:

[ 1  –2  0  –1   3]   [1  –2  0  –1    3]   [1  –2  0  –1    3]
[–1   3  1   4  –7] ~ [0   1  1   3   –4] ~ [0   1  1   3   –4]
[ 2  –1  3   7  –6]   [0   3  3   9  –12]   [0   0  0   4  –10]
[ 3  –4  2   7  –9]   [0   2  2  10  –18]   [0   0  0   0    0]

Columns 1, 2, and 4 of the original matrix form a basis for H, so dim H = 3.
15. Col A = R^4, because A has a pivot in each row and so the columns of A span R^4. Nul A cannot equal R^2, because Nul A is a subspace of R^6. It is true, however, that Nul A is two-dimensional. Reason: the equation Ax = 0 has two free variables, because A has six columns and only four of them are pivot columns.

16. Col A cannot be R^3 because the columns of A have four entries. (In fact, Col A is a 3-dimensional subspace of R^4, because the 3 pivot columns of A form a basis for Col A.) Since A has 7 columns and 3 pivot columns, the equation Ax = 0 has 4 free variables. So, dim Nul A = 4.
17. a. True. This is the definition of a B-coordinate vector.
b. False. Dimension is defined only for a subspace. A line must be through the origin in R^n to be a subspace of R^n.
c. True. The sentence before Example 3 concludes that the number of pivot columns of A is the rank
of A, which is the dimension of Col A by definition.
d. True. This is equivalent to the Rank Theorem because rank A is the dimension of Col A.
e. True, by the Basis Theorem. In this case, the spanning set is automatically a linearly independent
set.
18. a. True. This fact is justified in the second paragraph of this section.
b. False. The dimension of Nul A is the number of free variables in the equation Ax = 0.
See Example 2.
c. True, by the definition of rank.
d. True. See the second paragraph after Fig. 1.
e. True, by the Basis Theorem. In this case, the linearly independent set is automatically a spanning
set.
19. The fact that the solution space of Ax = 0 has a basis of three vectors means that dim Nul A = 3.
Since a 5×7 matrix A has 7 columns, the Rank Theorem shows that rank A = 7 – dim Nul A = 4.
Note: One can solve Exercises 19–22 without explicit reference to the Rank Theorem. For instance, in Exercise 19, if the null space of a matrix A is three-dimensional, then the equation Ax = 0 has three free variables, and three of the columns of A are nonpivot columns. Since a 5×7 matrix has seven columns, A must have four pivot columns (which form a basis of Col A). So rank A = dim Col A = 4.
20. A 6×8 matrix A has 8 columns. By the Rank Theorem, rank A = 8 – dim Nul A. Since the null space
is three-dimensional, rank A = 5.
21. A 9×8 matrix has 8 columns. By the Rank Theorem, dim Nul A = 8 – rank A. Since the rank is seven,
dim Nul A = 1. That is, the dimension of the solution space of Ax = 0 is one.
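The Rank Theorem bookkeeping in Exercises 19–21 can be checked numerically: compute the rank by elimination and subtract from the number of columns. A sketch (the helper `rank` is ours; the sample matrix is the 3×4 matrix of Exercise 24 in Section 2.8):

```python
from fractions import Fraction

def rank(A):
    # Forward elimination with exact arithmetic; the pivot count is rank A.
    M = [[Fraction(x) for x in row] for row in A]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, len(M)):
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# Rank Theorem: rank A + dim Nul A = number of columns.
A = [[3, -6, 9, 0], [2, -4, 7, 2], [3, -6, 6, -6]]
n_cols = 4
assert rank(A) == 2
assert n_cols - rank(A) == 2   # dim Nul A, matching the two free variables
```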
22. Suppose that the subspace H = Span{v1, …, v5} is four-dimensional. If {v1, …, v5} were linearly independent, it would be a basis for H. This is impossible, by the statement just before the definition of dimension in Section 2.9, which essentially says that every basis of a p-dimensional subspace consists of p vectors. Thus, {v1, …, v5} must be linearly dependent.
23. A 3×5 matrix A with a two-dimensional column space has two pivot columns. The remaining three columns will correspond to free variables in the equation Ax = 0. So the desired construction is possible. There are ten possible locations for the two pivot columns, one of which is

[■  *  *  *  *]
[0  ■  *  *  *]
[0  0  0  0  0]

A simple construction is to take two vectors in R^3 that are obviously not linearly dependent, and put three copies of these two vectors in any order. The resulting matrix will obviously have a two-dimensional column space. There is no need to worry about whether Nul A has the correct dimension, since this is guaranteed by the Rank Theorem: dim Nul A = 5 – rank A.
24. A rank 1 matrix has a one-dimensional column space. Every column is a multiple of some fixed vector. To construct a 3×4 matrix, choose any nonzero vector in R^3, and use it for one column. Choose any multiples of the vector for the other three columns.
25. The p columns of A span Col A by definition. If dim Col A = p, then the spanning set of p columns is
automatically a basis for Col A, by the Basis Theorem. In particular, the columns are linearly
independent.
26. If columns a1, a3, a4, a5, and a7 of A are linearly independent and if dim Col A = 5, then {a1, a3, a4, a5, a7} is a linearly independent set in a 5-dimensional column space. By the Basis Theorem, this set of five vectors is a basis for the column space.
27. a. Start with B = [b1 ⋯ bp] and A = [a1 ⋯ aq], where q > p. For j = 1, …, q, the vector aj is in W. Since the columns of B span W, the vector aj is in the column space of B. That is, aj = Bcj for some vector cj of weights. Note that cj is in R^p because B has p columns.
b. Let C = [c1 ⋯ cq]. Then C is a p×q matrix because each of the q columns is in R^p. By hypothesis, q is larger than p, so C has more columns than rows. By a theorem, the columns of C are linearly dependent and there exists a nonzero vector u in R^q such that Cu = 0.
c. From part (a) and the definition of matrix multiplication

A = [a1 ⋯ aq] = [Bc1 ⋯ Bcq] = BC
From part (b), Au = (BC)u = B(Cu) = B0 = 0. Since u is nonzero, the columns of A are linearly
dependent.
28. If A contained more vectors than B, then A would be linearly dependent, by Exercise 27, because B
spans W. Repeat the argument with B and A interchanged to conclude that B cannot contain more
vectors than A.
29. [M] Apply the matrix command ref or rref to the matrix [v_1 v_2 x]:
[15 14 16]   [1 0 2]
[–5 –10 0] ~ [0 1 –1]
[12 13 11]   [0 0 0]
[7 17 –3]    [0 0 0]
The equation c_1v_1 + c_2v_2 = x is consistent, so x is in the subspace H. Then c_1 = 2 and c_2 = –1. Thus
the B-coordinate vector of x is (2, –1).
30. [M] Apply the matrix command ref or rref to the matrix [v_1 v_2 v_3 x]:
[6 8 –9 11]   [1 0 0 2]
[3 0 –4 2]  ~ [0 1 0 1]
[9 7 –8 17]   [0 0 1 1]
[4 3 –3 8]    [0 0 0 0]
The first three columns of [v_1 v_2 v_3 x] are pivot columns, so v_1, v_2 and v_3 are linearly independent.
Thus v_1, v_2 and v_3 form a basis B for the subspace H which they span. View [v_1 v_2 v_3 x] as an
augmented matrix for c_1v_1 + c_2v_2 + c_3v_3 = x. The reduced echelon form shows that x is in H and
[x]_B = [2; 1; 1].
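Both [M] exercises amount to solving Vc = x, where the columns of V are the basis vectors. A small NumPy sketch (not part of the text; the vectors are the Exercise 29 data as read from this print, so treat them as an assumption):

```python
import numpy as np

# Basis vectors and x as read from Exercise 29.
v1 = np.array([15., -5., 12., 7.])
v2 = np.array([14., -10., 13., 17.])
x = np.array([16., 0., 11., -3.])

V = np.column_stack([v1, v2])              # columns are the basis vectors
c, res, rank, sv = np.linalg.lstsq(V, x, rcond=None)
print(np.allclose(V @ c, x))               # consistent, so x is in H
print(c)                                   # the B-coordinate vector (2, -1)
```

`lstsq` is used rather than `solve` because V is 4×2, not square; a consistent system with independent columns still returns the unique coordinate vector.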
Notes: The Study Guide for Section 2.9 contains a complete list of the statements in the Invertible Matrix
Theorem that have been given so far. The format is the same as that used in Section 2.3, with three
columns: statements that are logically equivalent for any m×n matrix and are related to existence concepts,
those that are equivalent only for any n×n matrix, and those that are equivalent for any n×p matrix and
are related to uniqueness concepts. Four statements are included that are not in the text’s official list of
statements, to give more symmetry to the three columns.
The Study Guide section also contains directions for making a review sheet for “dimension” and
“rank.”
Chapter 2 SUPPLEMENTARY EXERCISES
1. a. True. If A and B are m×n matrices, then B^T has as many rows as A has columns, so AB^T is
defined. Also, A^TB is defined because A^T has m columns and B has m rows.
b. False. B must have 2 columns. A has as many columns as B has rows.
c. True. The ith row of A has the form (0, …, d_i, …, 0). So the ith row of AB is (0, …, d_i, …, 0)B,
which is d_i times the ith row of B.
d. False. Take the zero matrix for B. Or, construct a matrix B such that the equation Bx = 0 has
nontrivial solutions, and construct C and D so that C ≠ D and the columns of C – D satisfy the
equation Bx = 0. Then B(C – D) = 0 and BC = BD.
e. False. Counterexample: A = [1 0; 0 0] and C = [0 0; 0 1].
f. False. (A + B)(A – B) = A^2 – AB + BA – B^2. This equals A^2 – B^2 if and only if A commutes with B.
g. True. An n×n replacement matrix has n + 1 nonzero entries. The n×n scale and interchange
matrices have n nonzero entries.
h. True. The transpose of an elementary matrix is an elementary matrix of the same type.
i. True. An n×n elementary matrix is obtained by a row operation on I_n.
j. False. Elementary matrices are invertible, so a product of such matrices is invertible. But not
every square matrix is invertible.
k. True. If A is 3×3 with three pivot positions, then A is row equivalent to I_3.
l. False. A must be square in order to conclude from the equation AB = I that A is invertible.
m. False. AB is invertible, but (AB)^{–1} = B^{–1}A^{–1}, and this product is not always equal to A^{–1}B^{–1}.
n. True. Given AB = BA, left-multiply by A^{–1} to get B = A^{–1}BA, and then right-multiply by A^{–1} to
obtain BA^{–1} = A^{–1}B.
o. False. The correct equation is (rA)^{–1} = r^{–1}A^{–1}, because
(rA)(r^{–1}A^{–1}) = (rr^{–1})(AA^{–1}) = 1·I = I.
p. True. If the equation Ax = [1; 0; 0] has a unique solution, then there are no free variables in this
equation, which means that A must have three pivot positions (since A is 3×3). By the Invertible
Matrix Theorem, A is invertible.
2. C = (C^{–1})^{–1} = (1/2)[–7 5; –6 4] = [–7/2 5/2; –3 2]
3. A = [0 0 0; 1 0 0; 0 1 0], A^2 = A·A = [0 0 0; 0 0 0; 1 0 0], A^3 = A·A^2 = [0 0 0; 0 0 0; 0 0 0]
Next, (I – A)(I + A + A^2) = I + A + A^2 – A – A^2 – A^3 = I – A^3.
Since A^3 = 0, (I – A)(I + A + A^2) = I.
4. From Exercise 3, the inverse of I – A is probably I + A + A^2 + ⋯ + A^{n–1}. To verify this, compute
(I – A)(I + A + ⋯ + A^{n–1}) = I + A + ⋯ + A^{n–1} – A(I + A + ⋯ + A^{n–1}) = I – A^n
If A^n = 0, then the matrix B = I + A + A^2 + ⋯ + A^{n–1} satisfies (I – A)B = I. Since I – A and B are
square, they are invertible by the Invertible Matrix Theorem, and B is the inverse of I – A.
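The telescoping identity above is easy to check numerically for the nilpotent matrix of Exercise 3; a NumPy sketch (not part of the text):

```python
import numpy as np

# Nilpotent matrix from Exercise 3: A^3 = 0.
A = np.array([[0, 0, 0],
              [1, 0, 0],
              [0, 1, 0]])
I = np.eye(3)

B = I + A + A @ A                      # candidate inverse of I - A
print(np.allclose((I - A) @ B, I))     # True: B really is (I - A)^{-1}
```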
5. A^2 = 2A – I. Multiply by A: A^3 = 2A^2 – A. Substitute A^2 = 2A – I: A^3 = 2(2A – I) – A = 3A – 2I.
Multiply by A again: A^4 = A(3A – 2I) = 3A^2 – 2A. Substitute the identity A^2 = 2A – I again:
Finally, A^4 = 3(2A – I) – 2A = 4A – 3I.
6. Let A = [1 0; 0 –1] and B = [0 1; 1 0]. By direct computation, A^2 = I, B^2 = I, and
AB = [0 1; –1 0] = –BA.
7. (Partial answer in Study Guide) Since A^{–1}B is the solution of AX = B, row reduction of [A B] to
[I X] will produce X = A^{–1}B. See Exercise 15 in Section 2.2.
[A B] = [1 3 8 3 5; 2 4 11 –1 5; 1 2 5 –3 4] ~ [1 3 8 3 5; 0 –2 –5 –7 –5; 0 –1 –3 –6 –1] ~ [1 3 8 3 5; 0 1 3 6 1; 0 –2 –5 –7 –5]
~ [1 3 8 3 5; 0 1 3 6 1; 0 0 1 5 –3] ~ [1 3 0 –37 29; 0 1 0 –9 10; 0 0 1 5 –3] ~ [1 0 0 –10 –1; 0 1 0 –9 10; 0 0 1 5 –3]
Thus, A^{–1}B = [–10 –1; –9 10; 5 –3].
8. By definition of matrix multiplication, the matrix A satisfies
A[1 2; 3 7] = [1 3; 1 1]
Right-multiply both sides by the inverse of [1 2; 3 7]. The left side becomes A. Thus,
A = [1 3; 1 1][7 –2; –3 1] = [–2 1; 4 –1]
9. Given AB = [5 4; –2 3] and B = [7 3; 2 1], notice that (AB)B^{–1} = A. Since det B = 7 – 6 = 1,
B^{–1} = [1 –3; –2 7] and A = (AB)B^{–1} = [5 4; –2 3][1 –3; –2 7] = [–3 13; –8 27]
Note: Variants of this question make simple exam questions.
10. Since A is invertible, so is A^T, by the Invertible Matrix Theorem. Then A^TA is the product of
invertible matrices and so is invertible. Thus, the formula (A^TA)^{–1}A^T makes sense. By Theorem 6 in
Section 2.2,
(A^TA)^{–1}·A^T = A^{–1}(A^T)^{–1}·A^T = A^{–1}I = A^{–1}
An alternative calculation: (A^TA)^{–1}A^T·A = (A^TA)^{–1}(A^TA) = I. Since A is invertible, this equation shows
that its inverse is (A^TA)^{–1}A^T.
11. a. For i = 1, …, n, p(x_i) = c_0 + c_1x_i + ⋯ + c_{n–1}x_i^{n–1} = row_i(V)·c.
By a property of matrix multiplication, shown after Example 6 in Section 2.1, and the fact that c
was chosen to satisfy Vc = y,
row_i(V)·c = row_i(Vc) = row_i(y) = y_i
Thus, p(x_i) = y_i. To summarize, the entries in Vc are the values of the polynomial p(x) at x_1, …, x_n.
b. Suppose x_1, …, x_n are distinct, and suppose Vc = 0 for some vector c. Then the entries in c are the
coefficients of a polynomial whose value is zero at the distinct points x_1, ..., x_n. However, a
nonzero polynomial of degree n – 1 cannot have n zeros, so the polynomial must be identically
zero. That is, the entries in c must all be zero. This shows that the columns of V are linearly
independent.
c. (Solution in Study Guide) When x_1, …, x_n are distinct, the columns of V are linearly independent,
by (b). By the Invertible Matrix Theorem, V is invertible and its columns span R^n. So, for every
y = (y_1, …, y_n) in R^n, there is a vector c such that Vc = y. Let p be the polynomial whose
coefficients are listed in c. Then, by (a), p is an interpolating polynomial for (x_1, y_1), …, (x_n, y_n).
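Part (c) is exactly polynomial interpolation through a Vandermonde system; a NumPy sketch (not part of the text; the sample points are made up for illustration):

```python
import numpy as np

# Distinct points and arbitrary target values.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 2.0, 5.0])

V = np.vander(x, increasing=True)      # rows (1, x_i, x_i^2, x_i^3)
c = np.linalg.solve(V, y)              # coefficients c_0, ..., c_{n-1}
p = np.polynomial.Polynomial(c)        # p(t) = c_0 + c_1 t + ...
print(np.allclose(p(x), y))            # True: p interpolates the data
```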
12. If A = LU, then col_1(A) = L·col_1(U). Since col_1(U) has a zero in every entry except possibly the first,
L·col_1(U) is a linear combination of the columns of L in which all weights except possibly the first
are zero. So col_1(A) is a multiple of col_1(L).
Similarly, col_2(A) = L·col_2(U), which is a linear combination of the columns of L using the first
two entries in col_2(U) as weights, because the other entries in col_2(U) are zero. Thus col_2(A) is a
linear combination of the first two columns of L.
13. a. P^2 = (uu^T)(uu^T) = u(u^Tu)u^T = u(1)u^T = P, because u satisfies u^Tu = 1.
b. P^T = (uu^T)^T = u^{TT}u^T = uu^T = P
c. Q^2 = (I – 2P)(I – 2P) = I – I(2P) – 2PI + 2P(2P) = I – 4P + 4P^2 = I, because of part (a).
14. Given u = [0; 0; 1], define P and Q as in Exercise 13 by
P = uu^T = [0 0 0; 0 0 0; 0 0 1], Q = I – 2P = [1 0 0; 0 1 0; 0 0 –1]
If x = [1; 5; 3], then Px = [0; 0; 3] and Qx = [1; 5; –3].
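The projection/reflection algebra of Exercises 13–14 can be replayed numerically; a NumPy sketch (not part of the text):

```python
import numpy as np

# Unit vector u from Exercise 14; P = u u^T projects, Q = I - 2P reflects.
u = np.array([[0.0], [0.0], [1.0]])
P = u @ u.T
Q = np.eye(3) - 2 * P

x = np.array([[1.0], [5.0], [3.0]])
print(np.allclose(P @ P, P), np.allclose(Q @ Q, np.eye(3)))   # True True
print(np.allclose(P @ x, [[0.], [0.], [3.]]),
      np.allclose(Q @ x, [[1.], [5.], [-3.]]))                # True True
```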
15. Left-multiplication by an elementary matrix produces an elementary row operation:
B ~ E_1B ~ E_2E_1B ~ E_3E_2E_1B = C
so B is row equivalent to C. Since row operations are reversible, C is row equivalent to B.
(Alternatively, show C being changed into B by row operations using the inverse of the E_i.)
16. Since A is not invertible, there is a nonzero vector v in R^n such that Av = 0. Place n copies of v into
an n×n matrix B. Then AB = A[v ⋯ v] = [Av ⋯ Av] = 0.
17. Let A be a 6×4 matrix and B a 4×6 matrix. Since B has more columns than rows, its six columns are
linearly dependent and there is a nonzero x such that Bx = 0. Thus ABx = A0 = 0. This shows that the
matrix AB is not invertible, by the IMT. (Basically the same argument was used to solve Exercise 22
in Section 2.1.)
Note: (In the Study Guide) It is possible that BA is invertible. For example, let C be an invertible 4×4
matrix and construct A = [C; 0] and B = [C^{–1} 0]. Then BA = I_4, which is invertible.
18. By hypothesis, A is 5×3, C is 3×5, and CA = I_3. Suppose x satisfies Ax = b. Then CAx = Cb. Since
CA = I, x must be Cb. This shows that Cb is the only solution of Ax = b.
19. [M] Let A = [.4 .2 .3; .3 .6 .3; .3 .2 .4]. Then A^2 = [.31 .26 .30; .39 .48 .39; .30 .26 .31]. Instead of computing A^3 next, speed up
the calculations by computing
A^4 = A^2·A^2 = [.2875 .2834 .2874; .4251 .4332 .4251; .2874 .2834 .2875], A^8 = A^4·A^4 = [.2857 .2857 .2857; .4285 .4286 .4285; .2857 .2857 .2857]
To four decimal places, as k increases,
A^k → [.2857 .2857 .2857; .4286 .4286 .4286; .2857 .2857 .2857], or, in rational format, A^k → [2/7 2/7 2/7; 3/7 3/7 3/7; 2/7 2/7 2/7].
If B = [0 .2 .3; .1 .6 .3; .9 .2 .4], then B^2 = [.29 .18 .18; .33 .44 .33; .38 .38 .49],
B^4 = [.2119 .1998 .1998; .3663 .3784 .3663; .4218 .4218 .4339], B^8 = [.2024 .2022 .2022; .3707 .3709 .3707; .4269 .4269 .4271]
To four decimal places, as k increases,
B^k → [.2022 .2022 .2022; .3708 .3708 .3708; .4270 .4270 .4270], or, in rational format, B^k → [18/89 18/89 18/89; 33/89 33/89 33/89; 38/89 38/89 38/89].
20. [M] The 4×4 matrix A_4 is the 4×4 matrix of ones, minus the 4×4 identity matrix. The MATLAB
command is A4 = ones(4) – eye(4). For the inverse, use inv(A4).
A_4 = [0 1 1 1; 1 0 1 1; 1 1 0 1; 1 1 1 0], A_4^{–1} = [–2/3 1/3 1/3 1/3; 1/3 –2/3 1/3 1/3; 1/3 1/3 –2/3 1/3; 1/3 1/3 1/3 –2/3]
A_5 = [0 1 1 1 1; 1 0 1 1 1; 1 1 0 1 1; 1 1 1 0 1; 1 1 1 1 0], A_5^{–1} = [–3/4 1/4 1/4 1/4 1/4; 1/4 –3/4 1/4 1/4 1/4; 1/4 1/4 –3/4 1/4 1/4; 1/4 1/4 1/4 –3/4 1/4; 1/4 1/4 1/4 1/4 –3/4]
A_6 = [0 1 1 1 1 1; 1 0 1 1 1 1; 1 1 0 1 1 1; 1 1 1 0 1 1; 1 1 1 1 0 1; 1 1 1 1 1 0], A_6^{–1} = [–4/5 1/5 1/5 1/5 1/5 1/5; 1/5 –4/5 1/5 1/5 1/5 1/5; 1/5 1/5 –4/5 1/5 1/5 1/5; 1/5 1/5 1/5 –4/5 1/5 1/5; 1/5 1/5 1/5 1/5 –4/5 1/5; 1/5 1/5 1/5 1/5 1/5 –4/5]
The construction of A_6 and the appearance of its inverse suggest that the inverse is related to I_6. In
fact, A_6^{–1} + I_6 is 1/5 times the 6×6 matrix of ones. Let J denote the n×n matrix of ones. The
conjecture is:
A_n = J – I_n and A_n^{–1} = (1/(n – 1))·J – I_n
Proof: (Not required) Observe that J^2 = nJ and A_nJ = (J – I)J = J^2 – J = (n – 1)J. Now compute
A_n((n – 1)^{–1}J – I) = (n – 1)^{–1}A_nJ – A_n = J – (J – I) = I
Since A_n is square, A_n is invertible and its inverse is (n – 1)^{–1}J – I.
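The conjecture and its proof can be verified for several n at once; a NumPy sketch (not part of the text):

```python
import numpy as np

# Exercise 20: A_n = J - I_n has inverse (n-1)^{-1} J - I_n.
for n in range(2, 8):
    J = np.ones((n, n))
    A = J - np.eye(n)
    Ainv = J / (n - 1) - np.eye(n)
    assert np.allclose(A @ Ainv, np.eye(n))
print("verified for n = 2..7")
```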
3.1 SOLUTIONS
Notes:
Some exercises in this section provide practice in computing determinants, while others allow the
student to discover the properties of determinants which will be studied in the next section. Determinants
are developed through the cofactor expansion, which is given in Theorem 1. Exercises 33–36 in this
section provide the first step in the inductive proof of Theorem 3 in the next section.
1. Expanding along the first row:
det[3 0 4; 2 3 2; 0 5 –1] = 3·det[3 2; 5 –1] – 0·det[2 2; 0 –1] + 4·det[2 3; 0 5] = 3(–13) + 4(10) = 1
Expanding along the second column:
(–1)^{1+2}·0·det[2 2; 0 –1] + (–1)^{2+2}·3·det[3 4; 0 –1] + (–1)^{3+2}·5·det[3 4; 2 2] = 3(–3) – 5(–2) = 1
2. Expanding along the first row:
det[0 5 1; 4 –3 0; 2 4 1] = 0·det[–3 0; 4 1] – 5·det[4 0; 2 1] + 1·det[4 –3; 2 4] = –5(4) + 1(22) = 2
Expanding along the second column:
(–1)^{1+2}·5·det[4 0; 2 1] + (–1)^{2+2}·(–3)·det[0 1; 2 1] + (–1)^{3+2}·4·det[0 1; 4 0] = –5(4) – 3(–2) – 4(–4) = 2
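Theorem 1’s cofactor expansion translates directly into a short recursive routine; a plain-Python teaching sketch (not part of the text, and far too slow for large matrices):

```python
# Cofactor expansion along the first row, exactly as in the hand computations.
def det(M):
    if len(M) == 1:
        return M[0][0]
    total = 0
    for j in range(len(M)):
        # Minor: delete row 0 and column j.
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

print(det([[3, 0, 4], [2, 3, 2], [0, 5, -1]]))  # 1, matching Exercise 1
```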
3. Expanding along the first row:
det[2 –4 3; 3 1 2; 1 4 –1] = 2·det[1 2; 4 –1] – (–4)·det[3 2; 1 –1] + 3·det[3 1; 1 4] = 2(–9) + 4(–5) + 3(11) = –5
Expanding along the second column:
(–1)^{1+2}·(–4)·det[3 2; 1 –1] + (–1)^{2+2}·1·det[2 3; 1 –1] + (–1)^{3+2}·4·det[2 3; 3 2] = 4(–5) + 1(–5) – 4(–5) = –5
4. Expanding along the first row:
det[1 3 5; 2 1 1; 3 4 2] = 1·det[1 1; 4 2] – 3·det[2 1; 3 2] + 5·det[2 1; 3 4] = 1(–2) – 3(1) + 5(5) = 20
Expanding along the second column:
(–1)^{1+2}·3·det[2 1; 3 2] + (–1)^{2+2}·1·det[1 5; 3 2] + (–1)^{3+2}·4·det[1 5; 2 1] = –3(1) + 1(–13) – 4(–9) = 20
5. Expanding along the first row:
det[2 3 –4; 4 0 5; 5 1 6] = 2·det[0 5; 1 6] – 3·det[4 5; 5 6] + (–4)·det[4 0; 5 1] = 2(–5) – 3(–1) – 4(4) = –23
6. Expanding along the first row:
det[5 –2 4; 0 3 –5; 2 –4 7] = 5·det[3 –5; –4 7] – (–2)·det[0 –5; 2 7] + 4·det[0 3; 2 –4] = 5(1) + 2(10) + 4(–6) = 1
7. Expanding along the first row:
det[4 3 0; 6 5 2; 9 7 3] = 4·det[5 2; 7 3] – 3·det[6 2; 9 3] + 0·det[6 5; 9 7] = 4(1) – 3(0) = 4
8. Expanding along the first row:
det[8 1 6; 4 0 3; 3 –2 5] = 8·det[0 3; –2 5] – 1·det[4 3; 3 5] + 6·det[4 0; 3 –2] = 8(6) – 1(11) + 6(–8) = –11
9. First expand along the third row, then expand along the first row of the remaining matrix:
det[6 0 0 5; 1 7 2 –5; 2 0 0 0; 8 3 1 8] = (–1)^{3+1}·2·det[0 0 5; 7 2 –5; 3 1 8] = 2·(–1)^{1+3}·5·det[7 2; 3 1] = 10(1) = 10
10. First expand along the second row, then expand along either the third row or the second column of
the remaining matrix.
det[1 –2 5 2; 0 0 3 0; 2 –6 –7 5; 5 0 4 4] = (–1)^{2+3}·3·det[1 –2 2; 2 –6 5; 5 0 4]
= (–3)·((–1)^{3+1}·5·det[–2 2; –6 5] + (–1)^{3+3}·4·det[1 –2; 2 –6]) = (–3)(5(2) + 4(–2)) = –6
or
= (–3)·((–1)^{1+2}·(–2)·det[2 5; 5 4] + (–1)^{2+2}·(–6)·det[1 2; 5 4]) = (–3)(2(–17) + (–6)(–6)) = –6
11. There are many ways to do this determinant efficiently. One strategy is to always expand along the
first column of each matrix:
det[3 5 –8 4; 0 –2 3 –7; 0 0 1 5; 0 0 0 2] = (–1)^{1+1}·3·det[–2 3 –7; 0 1 5; 0 0 2] = 3·(–2)·det[1 5; 0 2]
= 3(–2)(2) = –12
12. There are many ways to do this determinant efficiently. One strategy is to always expand along the
first row of each matrix:
det[4 0 0 0; 7 –1 0 0; 2 6 3 0; 5 –8 4 –3] = (–1)^{1+1}·4·det[–1 0 0; 6 3 0; –8 4 –3] = 4·(–1)·det[3 0; 4 –3]
= 4(–1)(–9) = 36
13. First expand along either the second row or the second column. Using the second row,
det[4 0 –7 3 –5; 0 0 2 0 0; 7 3 –6 4 –8; 5 0 5 2 –3; 0 0 9 –1 2] = (–1)^{2+3}·2·det[4 0 3 –5; 7 3 4 –8; 5 0 2 –3; 0 0 –1 2]
Now expand along the second column to find:
(–1)^{2+3}·2·det[4 0 3 –5; 7 3 4 –8; 5 0 2 –3; 0 0 –1 2] = –2·(–1)^{2+2}·3·det[4 3 –5; 5 2 –3; 0 –1 2]
Now expand along either the first column or third row. The first column is used below.
–6·det[4 3 –5; 5 2 –3; 0 –1 2] = –6·((–1)^{1+1}·4·det[2 –3; –1 2] + (–1)^{2+1}·5·det[3 –5; –1 2]) = (–6)(4(1) – 5(1)) = 6
14. First expand along either the fourth row or the fifth column. Using the fifth column,
det[6 3 2 4 0; 9 0 –4 1 0; 8 –5 6 7 1; 3 0 0 0 0; 4 2 3 2 0] = (–1)^{3+5}·1·det[6 3 2 4; 9 0 –4 1; 3 0 0 0; 4 2 3 2]
Now expand along the third row to find:
(–1)^{3+5}·1·det[6 3 2 4; 9 0 –4 1; 3 0 0 0; 4 2 3 2] = 1·(–1)^{3+1}·3·det[3 2 4; 0 –4 1; 2 3 2]
Now expand along either the first column or second row. The first column is used below.
3·det[3 2 4; 0 –4 1; 2 3 2] = 3·((–1)^{1+1}·3·det[–4 1; 3 2] + (–1)^{3+1}·2·det[2 4; –4 1]) = (3)(3(–11) + 2(18)) = 9
15. det[3 0 4; 2 3 2; 0 5 –1] = (3)(3)(–1) + (0)(2)(0) + (4)(2)(5) – (0)(3)(4) – (5)(2)(3) – (–1)(2)(0)
= –9 + 0 + 40 – 0 – 30 – 0 = 1
16. det[0 5 1; 4 –3 0; 2 4 1] = (0)(–3)(1) + (5)(0)(2) + (1)(4)(4) – (2)(–3)(1) – (4)(0)(0) – (1)(4)(5)
= 0 + 0 + 16 – (–6) – 0 – 20 = 2
17. det[2 –4 3; 3 1 2; 1 4 –1] = (2)(1)(–1) + (–4)(2)(1) + (3)(3)(4) – (1)(1)(3) – (4)(2)(2) – (–1)(3)(–4)
= –2 + (–8) + 36 – 3 – 16 – 12 = –5
18. det[1 3 5; 2 1 1; 3 4 2] = (1)(1)(2) + (3)(1)(3) + (5)(2)(4) – (3)(1)(5) – (4)(1)(1) – (2)(2)(3)
= 2 + 9 + 40 – 15 – 4 – 12 = 20
19. det[a b; c d] = ad – bc, det[c d; a b] = cb – da = –(ad – bc)
The row operation swaps rows 1 and 2 of the matrix, and the sign of the determinant is reversed.
20. det[a b; c d] = ad – bc, det[a b; kc kd] = a(kd) – (kc)b = k(ad – bc)
The row operation scales row 2 by k, and the determinant is multiplied by k.
21. det[3 4; 5 6] = 18 – 20 = –2, det[3 4; 5+3k 6+4k] = 3(6 + 4k) – (5 + 3k)4 = –2
The row operation replaces row 2 with k times row 1 plus row 2, and the determinant is unchanged.
22. det[a b; c d] = ad – bc, det[a+kc b+kd; c d] = (a + kc)d – c(b + kd) = ad + kcd – cb – kcd = ad – bc
The row operation replaces row 1 with k times row 2 plus row 1, and the determinant is unchanged.
23. det[1 1 1; –3 8 –4; 2 –3 2] = 1(4) – 1(2) + 1(–7) = –5, det[k k k; –3 8 –4; 2 –3 2] = k(4) – k(2) + k(–7) = –5k
The row operation scales row 1 by k, and the determinant is multiplied by k.
24. det[a b c; 3 2 2; 6 5 6] = a(2) – b(6) + c(3) = 2a – 6b + 3c,
det[3 2 2; a b c; 6 5 6] = 3(6b – 5c) – 2(6a – 6c) + 2(5a – 6b) = –2a + 6b – 3c
The row operation swaps rows 1 and 2 of the matrix, and the sign of the determinant is reversed.
25. Since the matrix is triangular, by Theorem 2 the determinant is the product of the diagonal entries:
det[1 0 0; 0 1 0; k 0 1] = (1)(1)(1) = 1
26. Since the matrix is triangular, by Theorem 2 the determinant is the product of the diagonal entries:
det[1 0 0; 0 1 0; 0 k 1] = (1)(1)(1) = 1
27. Since the matrix is triangular, by Theorem 2 the determinant is the product of the diagonal entries:
det[k 0 0; 0 1 0; 0 0 1] = (k)(1)(1) = k
28. Since the matrix is triangular, by Theorem 2 the determinant is the product of the diagonal entries:
det[1 0 0; 0 k 0; 0 0 1] = (1)(k)(1) = k
29. A cofactor expansion along row 1 gives
det[0 1 0; 1 0 0; 0 0 1] = –1·det[1 0; 0 1] = –1
30. A cofactor expansion along row 1 gives
det[0 0 1; 0 1 0; 1 0 0] = 1·det[0 1; 1 0] = –1
31. A 3 × 3 elementary row replacement matrix looks like one of the six matrices
[1 0 0; k 1 0; 0 0 1], [1 0 0; 0 1 0; k 0 1], [1 0 0; 0 1 0; 0 k 1], [1 k 0; 0 1 0; 0 0 1], [1 0 k; 0 1 0; 0 0 1], [1 0 0; 0 1 k; 0 0 1]
In each of these cases, the matrix is triangular and its determinant is the product of its diagonal
entries, which is 1. Thus the determinant of a 3 × 3 elementary row replacement matrix is 1.
32. A 3 × 3 elementary scaling matrix with k on the diagonal looks like one of the three matrices
[k 0 0; 0 1 0; 0 0 1], [1 0 0; 0 k 0; 0 0 1], [1 0 0; 0 1 0; 0 0 k]
In each of these cases, the matrix is triangular and its determinant is the product of its diagonal
entries, which is k. Thus the determinant of a 3 × 3 elementary scaling matrix with k on the diagonal
is k.
33. E = [0 1; 1 0], A = [a b; c d], EA = [c d; a b]
det E = –1, det A = ad – bc,
det EA = cb – da = –1(ad – bc) = (det E)(det A)
34. E = [1 0; 0 k], A = [a b; c d], EA = [a b; kc kd]
det E = k, det A = ad – bc,
det EA = a(kd) – (kc)b = k(ad – bc) = (det E)(det A)
35. E = [1 k; 0 1], A = [a b; c d], EA = [a+kc b+kd; c d]
det E = 1, det A = ad – bc,
det EA = (a + kc)d – c(b + kd) = ad + kcd – bc – kcd = 1(ad – bc) = (det E)(det A)
36. E = [1 0; k 1], A = [a b; c d], EA = [a b; ka+c kb+d]
det E = 1, det A = ad – bc,
det EA = a(kb + d) – (ka + c)b = kab + ad – kab – bc = 1(ad – bc) = (det E)(det A)
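Exercises 33–36 can be spot-checked numerically for all four types of 2×2 elementary matrix; a NumPy sketch (not part of the text; the particular A and k are arbitrary choices):

```python
import numpy as np

A = np.array([[2., 3.], [1., 4.]])   # arbitrary 2x2 matrix
k = 5.0
Es = [np.array([[0., 1.], [1., 0.]]),   # interchange (Ex. 33)
      np.array([[1., 0.], [0., k]]),    # scaling (Ex. 34)
      np.array([[1., k], [0., 1.]]),    # replacement (Ex. 35)
      np.array([[1., 0.], [k, 1.]])]    # replacement (Ex. 36)

for E in Es:
    # det(EA) = (det E)(det A) in every case.
    assert np.isclose(np.linalg.det(E @ A), np.linalg.det(E) * np.linalg.det(A))
print("ok")
```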
37. A = [3 1; 4 2], 5A = [15 5; 20 10], det A = 2, det 5A = 50 ≠ 5·det A
38. A = [a b; c d], kA = [ka kb; kc kd], det A = ad – bc,
det kA = (ka)(kd) – (kb)(kc) = k^2(ad – bc) = k^2·det A
39. a. True. See the paragraph preceding the definition of the determinant.
b. False. See the definition of cofactor, which precedes Theorem 1.
40. a. False. See Theorem 1.
b. False. See Theorem 2.
41. The area of the parallelogram determined by u = [3; 0], v = [1; 2], u + v, and 0 is 6, since the base of
the parallelogram has length 3 and the height of the parallelogram is 2. By the same reasoning, the
area of the parallelogram determined by u = [3; 0], x = [x; 2], u + x, and 0 is also 6.
Also note that det[u v] = det[3 1; 0 2] = 6, and det[u x] = det[3 x; 0 2] = 6. The determinant of the
matrix whose columns are those vectors which define the sides of the parallelogram adjacent to 0 is
equal to the area of the parallelogram.
42. The area of the parallelogram determined by u = [a; b], v = [c; 0], u + v, and 0 is cb, since the base of
the parallelogram has length c and the height of the parallelogram is b.
Also note that det[u v] = det[a c; b 0] = –cb, and det[v u] = det[c a; 0 b] = cb. The determinant of
the matrix whose columns are those vectors which define the sides of the parallelogram adjacent to 0
either is equal to the area of the parallelogram or is equal to the negative of the area of the
parallelogram.
43. [M] Answers will vary. The conclusion should be that det (A + B) ≠ det A + det B.
44. [M] Answers will vary. The conclusion should be that det (AB) = (det A)(det B).
45. [M] Answers will vary. For 4 × 4 matrices, the conclusions should be that det A^T = det A,
det(–A) = det A, det(2A) = 16·det A, and det(10A) = 10^4·det A. For 5 × 5 matrices, the conclusions
should be that det A^T = det A, det(–A) = –det A, det(2A) = 32·det A, and det(10A) = 10^5·det A. For 6
× 6 matrices, the conclusions should be that det A^T = det A, det(–A) = det A, det(2A) = 64·det A, and
det(10A) = 10^6·det A.
46. [M] Answers will vary. The conclusion should be that det A^{–1} = 1/det A.
3.2 SOLUTIONS
Notes:
This section presents the main properties of the determinant, including the effects of row
operations on the determinant of a matrix. These properties are first studied by examples in Exercises 1–
20. The properties are treated in a more theoretical manner in later exercises. An efficient method for
computing the determinant using row reduction and selective cofactor expansion is presented in this
section and used in Exercises 11–14. Theorems 4 and 6 are used extensively in Chapter 5. The linearity
property of the determinant studied in the text is optional, but is used in more advanced courses.
1. Rows 1 and 2 are interchanged, so the determinant changes sign (Theorem 3b.).
2. The constant 2 may be factored out of the Row 1 (Theorem 3c.).
3. The row replacement operation does not change the determinant (Theorem 3a.).
4. The row replacement operation does not change the determinant (Theorem 3a.).
5. det[1 5 –6; –1 –4 4; –2 –7 9] = det[1 5 –6; 0 1 –2; 0 3 –3] = det[1 5 –6; 0 1 –2; 0 0 3] = 3
6. det[1 5 –3; 3 –3 3; 2 13 –7] = det[1 5 –3; 0 –18 12; 0 3 –1] = 6·det[1 5 –3; 0 –3 2; 0 3 –1] = 6·det[1 5 –3; 0 –3 2; 0 0 1] = 6(1)(–3)(1) = –18
7. det[1 3 0 2; –2 –5 7 4; 3 5 2 1; 1 –1 2 –3] = det[1 3 0 2; 0 1 7 8; 0 –4 2 –5; 0 –4 2 –5]
= det[1 3 0 2; 0 1 7 8; 0 0 30 27; 0 0 30 27] = det[1 3 0 2; 0 1 7 8; 0 0 30 27; 0 0 0 0] = 0
8. det[1 3 3 –4; 0 1 2 –5; 2 5 4 –3; –3 –7 –5 2] = det[1 3 3 –4; 0 1 2 –5; 0 –1 –2 5; 0 2 4 –10]
= det[1 3 3 –4; 0 1 2 –5; 0 0 0 0; 0 0 0 0] = 0
9. det[1 –1 –3 0; 0 1 5 4; –1 2 8 5; 3 –1 –2 3] = det[1 –1 –3 0; 0 1 5 4; 0 1 5 5; 0 2 7 3]
= det[1 –1 –3 0; 0 1 5 4; 0 0 0 1; 0 0 –3 –5] = –det[1 –1 –3 0; 0 1 5 4; 0 0 –3 –5; 0 0 0 1] = –(1)(1)(–3)(1) = 3
10. det[1 3 –1 0 –2; 0 2 –4 –1 –6; –2 –6 2 3 9; 3 7 –3 8 –7; 3 5 5 2 7]
= det[1 3 –1 0 –2; 0 2 –4 –1 –6; 0 0 0 3 5; 0 –2 0 8 –1; 0 –4 8 2 13]
= det[1 3 –1 0 –2; 0 2 –4 –1 –6; 0 0 0 3 5; 0 0 –4 7 –7; 0 0 0 0 1]
= –det[1 3 –1 0 –2; 0 2 –4 –1 –6; 0 0 –4 7 –7; 0 0 0 3 5; 0 0 0 0 1] = –(1)(2)(–4)(3)(1) = 24
11. First use a row replacement to create zeros in the second column, and then expand down the second
column:
det[2 5 –3 –1; 3 0 1 –3; –6 0 –4 9; 4 10 –4 –1] = det[2 5 –3 –1; 3 0 1 –3; –6 0 –4 9; 0 0 2 1] = –5·det[3 1 –3; –6 –4 9; 0 2 1]
Now use a row replacement to create zeros in the first column, and then expand down the first
column:
–5·det[3 1 –3; –6 –4 9; 0 2 1] = –5·det[3 1 –3; 0 –2 3; 0 2 1] = (–5)(3)·det[–2 3; 2 1] = (–5)(3)(–8) = 120
12. First use a row replacement to create zeros in the fourth column, and then expand down the fourth
column:
det[–1 2 3 0; 3 4 3 0; 5 4 6 6; 4 2 4 3] = det[–1 2 3 0; 3 4 3 0; –3 0 –2 0; 4 2 4 3] = 3·det[–1 2 3; 3 4 3; –3 0 –2]
Now use a row replacement to create zeros in the first column, and then expand down the first
column:
3·det[–1 2 3; 3 4 3; –3 0 –2] = 3·det[–1 2 3; 0 10 12; 0 –6 –11] = 3(–1)·det[10 12; –6 –11] = 3(–1)(–38) = 114
13. First use a row replacement to create zeros in the fourth column, and then expand down the fourth
column:
det[2 5 4 1; 4 7 6 2; 6 –2 –4 0; –6 7 7 0] = det[2 5 4 1; 0 –3 –2 0; 6 –2 –4 0; –6 7 7 0] = –1·det[0 –3 –2; 6 –2 –4; –6 7 7]
Now use a row replacement to create zeros in the first column, and then expand down the first
column:
–1·det[0 –3 –2; 6 –2 –4; –6 7 7] = –1·det[0 –3 –2; 6 –2 –4; 0 5 3] = (–1)(–6)·det[–3 –2; 5 3] = (–1)(–6)(1) = 6
14. First use a row replacement to create zeros in the third column, and then expand down the third
column:
det[–3 –2 1 –4; 1 3 0 –3; –3 4 –2 8; 3 –4 0 4] = det[–3 –2 1 –4; 1 3 0 –3; –9 0 0 0; 3 –4 0 4] = 1·det[1 3 –3; –9 0 0; 3 –4 4]
Now expand along the second row:
det[1 3 –3; –9 0 0; 3 –4 4] = –(–9)·det[3 –3; –4 4] = (9)(0) = 0
15. det[a b c; 5d 5e 5f; g h i] = 5·det[a b c; d e f; g h i] = 5(7) = 35
16. det[3a 3b 3c; d e f; g h i] = 3·det[a b c; d e f; g h i] = 3(7) = 21
17. det[a b c; g h i; d e f] = –det[a b c; d e f; g h i] = –7
18. det[g h i; a b c; d e f] = –det[a b c; g h i; d e f] = –(–det[a b c; d e f; g h i]) = 7
19. det[a b c; 2d+a 2e+b 2f+c; g h i] = det[a b c; 2d 2e 2f; g h i] = 2·det[a b c; d e f; g h i] = 2(7) = 14
20. det[a+d b+e c+f; d e f; g h i] = det[a b c; d e f; g h i] = 7
21. Since det[2 3 0; 1 3 4; 1 2 1] = –1 ≠ 0, the matrix is invertible.
22. Since det[5 0 –1; 1 –3 –2; 0 5 3] = 0, the matrix is not invertible.
23. Since det[2 0 0 8; 1 –7 –5 0; 3 8 6 0; 0 7 5 4] = 0, the matrix is not invertible.
24. Since det[4 7 3; 6 0 5; 7 2 6] = –11 ≠ 0, the columns of the matrix form a linearly independent set.
25. Since det[7 –8 7; –4 5 0; –6 7 –5] = –1 ≠ 0, the columns of the matrix form a linearly independent set.
26. Since det[3 2 –2 0; 5 –6 –1 0; –6 0 3 0; 4 7 0 3] = 0, the columns of the matrix form a linearly dependent set.
27. a. True. See Theorem 3.
b. True. See the paragraph following Example 2.
c. True. See the paragraph following Theorem 4.
d. False. See the warning following Example 5.
28. a. True. See Theorem 3.
b. False. See the paragraphs following Example 2.
c. False. See Example 3.
d. False. See Theorem 5.
29. By Theorem 6, det B^5 = (det B)^5 = (–2)^5 = –32.
30. Suppose the two rows of a square matrix A are equal. By swapping these two rows, the matrix A is
not changed so its determinant should not change. But since swapping rows changes the sign of the
determinant, det A = – det A. This is only possible if det A = 0. The same may be proven true for
columns by applying the above result to A^T and using Theorem 5.
31. By Theorem 6, (det A)(det A^{–1}) = det AA^{–1} = det I = 1, so det A^{–1} = 1/det A.
32. By factoring an r out of each of the n rows, det(rA) = r^n·det A.
33. By Theorem 6, det AB = (det A)(det B) = (det B)(det A) = det BA.
34. By Theorem 6 and Exercise 31,
det(PAP^{–1}) = (det P)(det A)(det P^{–1}) = (det P)(det P^{–1})(det A) = (det P)(1/det P)(det A) = det A
35. By Theorem 6 and Theorem 5, det U^TU = (det U^T)(det U) = (det U)^2. Since U^TU = I,
det U^TU = det I = 1, so (det U)^2 = 1. Thus det U = ±1.
36. By Theorem 6, det A^4 = (det A)^4. Since det A^4 = 0, then (det A)^4 = 0. Thus det A = 0, and A is not
invertible by Theorem 4.
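The identities of Exercises 29–32 are easy to spot-check numerically; a NumPy sketch on an arbitrary invertible matrix (the matrix itself is an illustration, not from the text):

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((3, 3)) + 3 * np.eye(3)   # comfortably invertible

# det(B^5) = (det B)^5, det(B^{-1}) = 1/det B, det(rB) = r^n det B for n = 3.
assert np.isclose(np.linalg.det(np.linalg.matrix_power(B, 5)), np.linalg.det(B) ** 5)
assert np.isclose(np.linalg.det(np.linalg.inv(B)), 1 / np.linalg.det(B))
assert np.isclose(np.linalg.det(2 * B), 2 ** 3 * np.linalg.det(B))
print("ok")
```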
37. One may compute using Theorem 2 that det A = 3 and det B = 8, while AB = [6 0; 17 4]. Thus
det AB = 24 = 3 × 8 = (det A)(det B).
38. One may compute that det A = 0 and det B = –2, while AB = [6 0; –2 0]. Thus
det AB = 0 = 0 × (–2) = (det A)(det B).
39. a. By Theorem 6, det AB = (det A)(det B) = 4 × (–3) = –12.
b. By Exercise 32, det 5A = 5^3·det A = 125 × 4 = 500.
c. By Theorem 5, det B^T = det B = –3.
d. By Exercise 31, det A^{–1} = 1/det A = 1/4.
e. By Theorem 6, det A^3 = (det A)^3 = 4^3 = 64.
40. a. By Theorem 6, det AB = (det A)(det B) = –1 × 2 = –2.
b. By Theorem 6, det B^5 = (det B)^5 = 2^5 = 32.
c. By Exercise 32, det 2A = 2^4·det A = 16 × (–1) = –16.
d. By Theorems 5 and 6, det A^TA = (det A^T)(det A) = (det A)(det A) = (–1)(–1) = 1.
e. By Theorem 6 and Exercise 31,
det B^{–1}AB = (det B^{–1})(det A)(det B) = (1/det B)(det A)(det B) = det A = –1.
41. det A = (a + e)d – c(b + f) = ad + ed – bc – cf = (ad – bc) + (ed – cf) = det B + det C.
42. det(A + B) = det[a+1 b; c d+1] = (a + 1)(d + 1) – cb = ad + a + d + 1 – cb = det A + a + d + det B, so
det (A + B) = det A + det B if and only if a + d = 0.
43. Compute det A by using a cofactor expansion down the third column:
det A = (u_1 + v_1)·det A_{13} – (u_2 + v_2)·det A_{23} + (u_3 + v_3)·det A_{33}
= (u_1·det A_{13} – u_2·det A_{23} + u_3·det A_{33}) + (v_1·det A_{13} – v_2·det A_{23} + v_3·det A_{33})
= det B + det C
44. By Theorem 5, det AE = det(AE)^T. Since (AE)^T = E^TA^T, det AE = det(E^TA^T). Now E^T is itself
an elementary matrix, so by the proof of Theorem 3, det(E^TA^T) = (det E^T)(det A^T). Thus it is true
that det AE = (det E^T)(det A^T), and by applying Theorem 5, det AE = (det E)(det A).
45. [M] Answers will vary, but will show that det A^TA always equals 0 while det AA^T should seldom
be zero. To see why A^TA should not be invertible (and thus det A^TA = 0), let A be a matrix with
more columns than rows. Then the columns of A must be linearly dependent, so the equation Ax = 0
must have a non-trivial solution x. Thus (A^TA)x = A^T(Ax) = A^T0 = 0, and the equation
(A^TA)x = 0 has a non-trivial solution. Since A^TA is a square matrix, the Invertible Matrix Theorem
now says that A^TA is not invertible. Notice that the same argument will not work in general for AA^T,
since A^T has more rows than columns, so its columns are not automatically linearly dependent.
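A quick numerical illustration of Exercise 45 (a NumPy sketch, not part of the text; the 3×5 matrix is randomly chosen):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 5))          # more columns than rows

# A^T A is 5x5 of rank at most 3, hence singular; A A^T is 3x3 and
# (for a random A) almost surely invertible.
print(np.isclose(np.linalg.det(A.T @ A), 0.0, atol=1e-8))   # True
print(abs(np.linalg.det(A @ A.T)) > 1e-8)                   # True
```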
46. [M] One may compute for this matrix that det A = –4008 and cond A ≈ 16.3. Note that this is the
2-norm condition number of A, which is used in Section 2.3. Since det A ≠ 0, it is invertible and
A^{–1} = (1/4008)·[837 –181 –207 297; –750 574 30 –654; –171 195 87 1095; –21 187 –81 639]
The determinant is very sensitive to scaling, as det 10A = 10^4·det A = –40,080,000 and
det 0.1A = (0.1)^4·det A = –0.4008. The condition number is not changed at all by scaling:
cond(10A) = cond(0.1A) = cond A ≈ 16.3. When A = I_4, det A = 1 and cond A = 1. As before the
determinant is sensitive to scaling: det 10A = 10^4·det A = 10,000 and det 0.1A = (0.1)^4·det A = 0.0001.
Yet the condition number is not changed by scaling: cond(10A) = cond(0.1A) = cond A = 1.
3.3 SOLUTIONS
Notes:
This section features several independent topics from which to choose. The geometric
interpretation of the determinant (Theorem 10) provides the key to changes of variables in multiple
integrals. Students of economics and engineering are likely to need Cramer’s Rule in later courses.
Exercises 1–10 concern Cramer’s Rule, exercises 11–18 deal with the adjugate, and exercises 19–32
cover the geometric interpretation of the determinant. In particular, Exercise 25 examines students’
understanding of linear independence and requires a careful explanation, which is discussed in the Study
Guide. The Study Guide also contains a heuristic proof of Theorem 9 for 2 × 2 matrices.
1. The system is equivalent to $A\mathbf{x} = \mathbf{b}$, where $A = \begin{bmatrix} 5 & 7 \\ 2 & 4 \end{bmatrix}$ and $\mathbf{b} = \begin{bmatrix} 3 \\ 1 \end{bmatrix}$. We compute
$$A_1(\mathbf{b}) = \begin{bmatrix} 3 & 7 \\ 1 & 4 \end{bmatrix},\ A_2(\mathbf{b}) = \begin{bmatrix} 5 & 3 \\ 2 & 1 \end{bmatrix},\ \det A = 6,\ \det A_1(\mathbf{b}) = 5,\ \det A_2(\mathbf{b}) = -1,$$
$$x_1 = \frac{\det A_1(\mathbf{b})}{\det A} = \frac{5}{6},\qquad x_2 = \frac{\det A_2(\mathbf{b})}{\det A} = -\frac{1}{6}.$$
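Cramer's rule, as applied throughout Exercises 1–10, can be sketched in Python with exact rational arithmetic. The system below is the one from Exercise 1:

```python
from fractions import Fraction

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def cramer2(A, b):
    # Cramer's rule for a 2x2 system: A_i(b) replaces column i of A by b.
    d = det2(A)
    A1 = [[b[0], A[0][1]], [b[1], A[1][1]]]
    A2 = [[A[0][0], b[0]], [A[1][0], b[1]]]
    return Fraction(det2(A1), d), Fraction(det2(A2), d)

A = [[5, 7], [2, 4]]
b = [3, 1]
x1, x2 = cramer2(A, b)
print(x1, x2)   # 5/6 -1/6
```

Using `Fraction` avoids floating-point round-off, so the answers match the hand computation exactly.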
2. The system is equivalent to $A\mathbf{x} = \mathbf{b}$, where $A = \begin{bmatrix} 4 & 1 \\ 5 & 2 \end{bmatrix}$ and $\mathbf{b} = \begin{bmatrix} 6 \\ 7 \end{bmatrix}$. We compute
$$A_1(\mathbf{b}) = \begin{bmatrix} 6 & 1 \\ 7 & 2 \end{bmatrix},\ A_2(\mathbf{b}) = \begin{bmatrix} 4 & 6 \\ 5 & 7 \end{bmatrix},\ \det A = 3,\ \det A_1(\mathbf{b}) = 5,\ \det A_2(\mathbf{b}) = -2,$$
$$x_1 = \frac{\det A_1(\mathbf{b})}{\det A} = \frac{5}{3},\qquad x_2 = \frac{\det A_2(\mathbf{b})}{\det A} = -\frac{2}{3}.$$
3. The system is equivalent to $A\mathbf{x} = \mathbf{b}$, where $A = \begin{bmatrix} 3 & -2 \\ -5 & 6 \end{bmatrix}$ and $\mathbf{b} = \begin{bmatrix} 7 \\ -5 \end{bmatrix}$. We compute
$$A_1(\mathbf{b}) = \begin{bmatrix} 7 & -2 \\ -5 & 6 \end{bmatrix},\ A_2(\mathbf{b}) = \begin{bmatrix} 3 & 7 \\ -5 & -5 \end{bmatrix},\ \det A = 8,\ \det A_1(\mathbf{b}) = 32,\ \det A_2(\mathbf{b}) = 20,$$
$$x_1 = \frac{\det A_1(\mathbf{b})}{\det A} = \frac{32}{8} = 4,\qquad x_2 = \frac{\det A_2(\mathbf{b})}{\det A} = \frac{20}{8} = \frac{5}{2}.$$
4. The system is equivalent to $A\mathbf{x} = \mathbf{b}$, where $A = \begin{bmatrix} -5 & 3 \\ 3 & -1 \end{bmatrix}$ and $\mathbf{b} = \begin{bmatrix} 9 \\ -5 \end{bmatrix}$. We compute
$$A_1(\mathbf{b}) = \begin{bmatrix} 9 & 3 \\ -5 & -1 \end{bmatrix},\ A_2(\mathbf{b}) = \begin{bmatrix} -5 & 9 \\ 3 & -5 \end{bmatrix},\ \det A = -4,\ \det A_1(\mathbf{b}) = 6,\ \det A_2(\mathbf{b}) = -2,$$
$$x_1 = \frac{\det A_1(\mathbf{b})}{\det A} = \frac{6}{-4} = -\frac{3}{2},\qquad x_2 = \frac{\det A_2(\mathbf{b})}{\det A} = \frac{-2}{-4} = \frac{1}{2}.$$
5. The system is equivalent to $A\mathbf{x} = \mathbf{b}$, where $A = \begin{bmatrix} 2 & 1 & 0 \\ -3 & 0 & 1 \\ 0 & 1 & 2 \end{bmatrix}$ and $\mathbf{b} = \begin{bmatrix} 7 \\ -8 \\ -3 \end{bmatrix}$. We compute
$$A_1(\mathbf{b}) = \begin{bmatrix} 7 & 1 & 0 \\ -8 & 0 & 1 \\ -3 & 1 & 2 \end{bmatrix},\ A_2(\mathbf{b}) = \begin{bmatrix} 2 & 7 & 0 \\ -3 & -8 & 1 \\ 0 & -3 & 2 \end{bmatrix},\ A_3(\mathbf{b}) = \begin{bmatrix} 2 & 1 & 7 \\ -3 & 0 & -8 \\ 0 & 1 & -3 \end{bmatrix},$$
$$\det A = 4,\quad \det A_1(\mathbf{b}) = 6,\quad \det A_2(\mathbf{b}) = 16,\quad \det A_3(\mathbf{b}) = -14,$$
$$x_1 = \frac{\det A_1(\mathbf{b})}{\det A} = \frac{6}{4} = \frac{3}{2},\quad x_2 = \frac{\det A_2(\mathbf{b})}{\det A} = \frac{16}{4} = 4,\quad x_3 = \frac{\det A_3(\mathbf{b})}{\det A} = -\frac{14}{4} = -\frac{7}{2}.$$
6. The system is equivalent to $A\mathbf{x} = \mathbf{b}$, where $A = \begin{bmatrix} 2 & 1 & 1 \\ -1 & 0 & 2 \\ 3 & 1 & 3 \end{bmatrix}$ and $\mathbf{b} = \begin{bmatrix} 4 \\ 2 \\ -2 \end{bmatrix}$. We compute
$$A_1(\mathbf{b}) = \begin{bmatrix} 4 & 1 & 1 \\ 2 & 0 & 2 \\ -2 & 1 & 3 \end{bmatrix},\ A_2(\mathbf{b}) = \begin{bmatrix} 2 & 4 & 1 \\ -1 & 2 & 2 \\ 3 & -2 & 3 \end{bmatrix},\ A_3(\mathbf{b}) = \begin{bmatrix} 2 & 1 & 4 \\ -1 & 0 & 2 \\ 3 & 1 & -2 \end{bmatrix},$$
$$\det A = 4,\quad \det A_1(\mathbf{b}) = -16,\quad \det A_2(\mathbf{b}) = 52,\quad \det A_3(\mathbf{b}) = -4,$$
$$x_1 = \frac{\det A_1(\mathbf{b})}{\det A} = \frac{-16}{4} = -4,\quad x_2 = \frac{\det A_2(\mathbf{b})}{\det A} = \frac{52}{4} = 13,\quad x_3 = \frac{\det A_3(\mathbf{b})}{\det A} = \frac{-4}{4} = -1.$$
7. The system is equivalent to $A\mathbf{x} = \mathbf{b}$, where $A = \begin{bmatrix} 6s & 4 \\ 9 & 2s \end{bmatrix}$ and $\mathbf{b} = \begin{bmatrix} 5 \\ -2 \end{bmatrix}$. We compute
$$A_1(\mathbf{b}) = \begin{bmatrix} 5 & 4 \\ -2 & 2s \end{bmatrix},\ A_2(\mathbf{b}) = \begin{bmatrix} 6s & 5 \\ 9 & -2 \end{bmatrix},\ \det A_1(\mathbf{b}) = 10s + 8,\ \det A_2(\mathbf{b}) = -12s - 45.$$
Since $\det A = 12s^2 - 36 = 12(s^2 - 3) \neq 0$ for $s \neq \pm\sqrt{3}$, the system will have a unique solution when $s \neq \pm\sqrt{3}$. For such a system, the solution will be
$$x_1 = \frac{\det A_1(\mathbf{b})}{\det A} = \frac{10s + 8}{12(s^2 - 3)} = \frac{5s + 4}{6(s^2 - 3)},\qquad x_2 = \frac{\det A_2(\mathbf{b})}{\det A} = \frac{-12s - 45}{12(s^2 - 3)} = -\frac{4s + 15}{4(s^2 - 3)}.$$
8. The system is equivalent to $A\mathbf{x} = \mathbf{b}$, where $A = \begin{bmatrix} 3s & -5 \\ 9 & 5s \end{bmatrix}$ and $\mathbf{b} = \begin{bmatrix} 3 \\ 2 \end{bmatrix}$. We compute
$$A_1(\mathbf{b}) = \begin{bmatrix} 3 & -5 \\ 2 & 5s \end{bmatrix},\ A_2(\mathbf{b}) = \begin{bmatrix} 3s & 3 \\ 9 & 2 \end{bmatrix},\ \det A_1(\mathbf{b}) = 15s + 10,\ \det A_2(\mathbf{b}) = 6s - 27.$$
Since $\det A = 15s^2 + 45 = 15(s^2 + 3) \neq 0$ for all values of $s$, the system will have a unique solution for all values of $s$. For such a system, the solution will be
$$x_1 = \frac{\det A_1(\mathbf{b})}{\det A} = \frac{15s + 10}{15(s^2 + 3)} = \frac{3s + 2}{3(s^2 + 3)},\qquad x_2 = \frac{\det A_2(\mathbf{b})}{\det A} = \frac{6s - 27}{15(s^2 + 3)} = \frac{2s - 9}{5(s^2 + 3)}.$$
9. The system is equivalent to $A\mathbf{x} = \mathbf{b}$, where $A = \begin{bmatrix} s & -2s \\ 3 & 6s \end{bmatrix}$ and $\mathbf{b} = \begin{bmatrix} -1 \\ 4 \end{bmatrix}$. We compute
$$A_1(\mathbf{b}) = \begin{bmatrix} -1 & -2s \\ 4 & 6s \end{bmatrix},\ A_2(\mathbf{b}) = \begin{bmatrix} s & -1 \\ 3 & 4 \end{bmatrix},\ \det A_1(\mathbf{b}) = 2s,\ \det A_2(\mathbf{b}) = 4s + 3.$$
Since $\det A = 6s^2 + 6s = 6s(s + 1) = 0$ for $s = 0, -1$, the system will have a unique solution when $s \neq 0, -1$. For such a system, the solution will be
$$x_1 = \frac{\det A_1(\mathbf{b})}{\det A} = \frac{2s}{6s(s + 1)} = \frac{1}{3(s + 1)},\qquad x_2 = \frac{\det A_2(\mathbf{b})}{\det A} = \frac{4s + 3}{6s(s + 1)}.$$
10. The system is equivalent to $A\mathbf{x} = \mathbf{b}$, where $A = \begin{bmatrix} 2s & 1 \\ 3s & 6s \end{bmatrix}$ and $\mathbf{b} = \begin{bmatrix} 1 \\ 2 \end{bmatrix}$. We compute
$$A_1(\mathbf{b}) = \begin{bmatrix} 1 & 1 \\ 2 & 6s \end{bmatrix},\ A_2(\mathbf{b}) = \begin{bmatrix} 2s & 1 \\ 3s & 2 \end{bmatrix},\ \det A_1(\mathbf{b}) = 6s - 2,\ \det A_2(\mathbf{b}) = s.$$
Since $\det A = 12s^2 - 3s = 3s(4s - 1) = 0$ for $s = 0, 1/4$, the system will have a unique solution when $s \neq 0, 1/4$. For such a system, the solution will be
$$x_1 = \frac{\det A_1(\mathbf{b})}{\det A} = \frac{6s - 2}{3s(4s - 1)},\qquad x_2 = \frac{\det A_2(\mathbf{b})}{\det A} = \frac{s}{3s(4s - 1)} = \frac{1}{3(4s - 1)}.$$
11. Since $\det A = 3$ and the cofactors of the given matrix are
$$C_{11} = \begin{vmatrix} 0 & 0 \\ 1 & 1 \end{vmatrix} = 0,\quad C_{12} = -\begin{vmatrix} 3 & 0 \\ -1 & 1 \end{vmatrix} = -3,\quad C_{13} = \begin{vmatrix} 3 & 0 \\ -1 & 1 \end{vmatrix} = 3,$$
$$C_{21} = -\begin{vmatrix} -2 & -1 \\ 1 & 1 \end{vmatrix} = 1,\quad C_{22} = \begin{vmatrix} 0 & -1 \\ -1 & 1 \end{vmatrix} = -1,\quad C_{23} = -\begin{vmatrix} 0 & -2 \\ -1 & 1 \end{vmatrix} = 2,$$
$$C_{31} = \begin{vmatrix} -2 & -1 \\ 0 & 0 \end{vmatrix} = 0,\quad C_{32} = -\begin{vmatrix} 0 & -1 \\ 3 & 0 \end{vmatrix} = -3,\quad C_{33} = \begin{vmatrix} 0 & -2 \\ 3 & 0 \end{vmatrix} = 6,$$
$$\operatorname{adj} A = \begin{bmatrix} 0 & 1 & 0 \\ -3 & -1 & -3 \\ 3 & 2 & 6 \end{bmatrix}\quad\text{and}\quad A^{-1} = \frac{1}{\det A}\operatorname{adj} A = \begin{bmatrix} 0 & 1/3 & 0 \\ -1 & -1/3 & -1 \\ 1 & 2/3 & 2 \end{bmatrix}.$$
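The adjugate formula of Theorem 8 can be sketched in Python and checked against this computation; the matrix `A` below is an assumed reading of the Exercise 11 data, used only for illustration:

```python
from fractions import Fraction

def det3(M):
    a, b, c = M[0]; d, e, f = M[1]; g, h, i = M[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def adjugate3(M):
    # adj(A) is the transpose of the cofactor matrix:
    # adj(A)[i][j] = C_ji = (-1)^(j+i) * det(minor of A with row j, col i removed)
    cof = [[0] * 3 for _ in range(3)]
    for r in range(3):
        for s in range(3):
            minor = [[M[i][j] for j in range(3) if j != s]
                     for i in range(3) if i != r]
            cof[r][s] = (-1) ** (r + s) * (minor[0][0] * minor[1][1]
                                           - minor[0][1] * minor[1][0])
    return [[cof[s][r] for s in range(3)] for r in range(3)]

A = [[0, -2, -1], [3, 0, 0], [-1, 1, 1]]   # assumed Exercise 11 matrix
adjA = adjugate3(A)
inv = [[Fraction(adjA[i][j], det3(A)) for j in range(3)] for i in range(3)]
print(adjA)   # [[0, 1, 0], [-3, -1, -3], [3, 2, 6]]
```

The same routine reproduces the adjugates in Exercises 12–16 by substituting each matrix.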
12. Since $\det A = 5$ and the cofactors of the given matrix are
$$C_{11} = \begin{vmatrix} -2 & 1 \\ 1 & 0 \end{vmatrix} = -1,\quad C_{12} = -\begin{vmatrix} 2 & 1 \\ 0 & 0 \end{vmatrix} = 0,\quad C_{13} = \begin{vmatrix} 2 & -2 \\ 0 & 1 \end{vmatrix} = 2,$$
$$C_{21} = -\begin{vmatrix} 1 & 3 \\ 1 & 0 \end{vmatrix} = 3,\quad C_{22} = \begin{vmatrix} 1 & 3 \\ 0 & 0 \end{vmatrix} = 0,\quad C_{23} = -\begin{vmatrix} 1 & 1 \\ 0 & 1 \end{vmatrix} = -1,$$
$$C_{31} = \begin{vmatrix} 1 & 3 \\ -2 & 1 \end{vmatrix} = 7,\quad C_{32} = -\begin{vmatrix} 1 & 3 \\ 2 & 1 \end{vmatrix} = 5,\quad C_{33} = \begin{vmatrix} 1 & 1 \\ 2 & -2 \end{vmatrix} = -4,$$
$$\operatorname{adj} A = \begin{bmatrix} -1 & 3 & 7 \\ 0 & 0 & 5 \\ 2 & -1 & -4 \end{bmatrix}\quad\text{and}\quad A^{-1} = \frac{1}{\det A}\operatorname{adj} A = \begin{bmatrix} -1/5 & 3/5 & 7/5 \\ 0 & 0 & 1 \\ 2/5 & -1/5 & -4/5 \end{bmatrix}.$$
13. Since $\det A = 6$ and the cofactors of the given matrix are
$$C_{11} = \begin{vmatrix} 0 & 1 \\ 1 & 1 \end{vmatrix} = -1,\quad C_{12} = -\begin{vmatrix} 1 & 1 \\ 2 & 1 \end{vmatrix} = 1,\quad C_{13} = \begin{vmatrix} 1 & 0 \\ 2 & 1 \end{vmatrix} = 1,$$
$$C_{21} = -\begin{vmatrix} 5 & 4 \\ 1 & 1 \end{vmatrix} = -1,\quad C_{22} = \begin{vmatrix} 3 & 4 \\ 2 & 1 \end{vmatrix} = -5,\quad C_{23} = -\begin{vmatrix} 3 & 5 \\ 2 & 1 \end{vmatrix} = 7,$$
$$C_{31} = \begin{vmatrix} 5 & 4 \\ 0 & 1 \end{vmatrix} = 5,\quad C_{32} = -\begin{vmatrix} 3 & 4 \\ 1 & 1 \end{vmatrix} = 1,\quad C_{33} = \begin{vmatrix} 3 & 5 \\ 1 & 0 \end{vmatrix} = -5,$$
$$\operatorname{adj} A = \begin{bmatrix} -1 & -1 & 5 \\ 1 & -5 & 1 \\ 1 & 7 & -5 \end{bmatrix}\quad\text{and}\quad A^{-1} = \frac{1}{\det A}\operatorname{adj} A = \begin{bmatrix} -1/6 & -1/6 & 5/6 \\ 1/6 & -5/6 & 1/6 \\ 1/6 & 7/6 & -5/6 \end{bmatrix}.$$
14. Since $\det A = -1$ and the cofactors of the given matrix are
$$C_{11} = \begin{vmatrix} 2 & 1 \\ 3 & 4 \end{vmatrix} = 5,\quad C_{12} = -\begin{vmatrix} 0 & 1 \\ 2 & 4 \end{vmatrix} = 2,\quad C_{13} = \begin{vmatrix} 0 & 2 \\ 2 & 3 \end{vmatrix} = -4,$$
$$C_{21} = -\begin{vmatrix} 6 & 7 \\ 3 & 4 \end{vmatrix} = -3,\quad C_{22} = \begin{vmatrix} 3 & 7 \\ 2 & 4 \end{vmatrix} = -2,\quad C_{23} = -\begin{vmatrix} 3 & 6 \\ 2 & 3 \end{vmatrix} = 3,$$
$$C_{31} = \begin{vmatrix} 6 & 7 \\ 2 & 1 \end{vmatrix} = -8,\quad C_{32} = -\begin{vmatrix} 3 & 7 \\ 0 & 1 \end{vmatrix} = -3,\quad C_{33} = \begin{vmatrix} 3 & 6 \\ 0 & 2 \end{vmatrix} = 6,$$
$$\operatorname{adj} A = \begin{bmatrix} 5 & -3 & -8 \\ 2 & -2 & -3 \\ -4 & 3 & 6 \end{bmatrix}\quad\text{and}\quad A^{-1} = \frac{1}{\det A}\operatorname{adj} A = \begin{bmatrix} -5 & 3 & 8 \\ -2 & 2 & 3 \\ 4 & -3 & -6 \end{bmatrix}.$$
15. Since $\det A = 6$ and the cofactors of the given matrix are
$$C_{11} = \begin{vmatrix} 1 & 0 \\ 3 & 2 \end{vmatrix} = 2,\quad C_{12} = -\begin{vmatrix} 1 & 0 \\ 2 & 2 \end{vmatrix} = -2,\quad C_{13} = \begin{vmatrix} 1 & 1 \\ 2 & 3 \end{vmatrix} = 1,$$
$$C_{21} = -\begin{vmatrix} 0 & 0 \\ 3 & 2 \end{vmatrix} = 0,\quad C_{22} = \begin{vmatrix} 3 & 0 \\ 2 & 2 \end{vmatrix} = 6,\quad C_{23} = -\begin{vmatrix} 3 & 0 \\ 2 & 3 \end{vmatrix} = -9,$$
$$C_{31} = \begin{vmatrix} 0 & 0 \\ 1 & 0 \end{vmatrix} = 0,\quad C_{32} = -\begin{vmatrix} 3 & 0 \\ 1 & 0 \end{vmatrix} = 0,\quad C_{33} = \begin{vmatrix} 3 & 0 \\ 1 & 1 \end{vmatrix} = 3,$$
$$\operatorname{adj} A = \begin{bmatrix} 2 & 0 & 0 \\ -2 & 6 & 0 \\ 1 & -9 & 3 \end{bmatrix}\quad\text{and}\quad A^{-1} = \frac{1}{\det A}\operatorname{adj} A = \begin{bmatrix} 1/3 & 0 & 0 \\ -1/3 & 1 & 0 \\ 1/6 & -3/2 & 1/2 \end{bmatrix}.$$
16. Since $\det A = -9$ and the cofactors of the given matrix are
$$C_{11} = \begin{vmatrix} -3 & 1 \\ 0 & 3 \end{vmatrix} = -9,\quad C_{12} = -\begin{vmatrix} 0 & 1 \\ 0 & 3 \end{vmatrix} = 0,\quad C_{13} = \begin{vmatrix} 0 & -3 \\ 0 & 0 \end{vmatrix} = 0,$$
$$C_{21} = -\begin{vmatrix} 2 & 4 \\ 0 & 3 \end{vmatrix} = -6,\quad C_{22} = \begin{vmatrix} 1 & 4 \\ 0 & 3 \end{vmatrix} = 3,\quad C_{23} = -\begin{vmatrix} 1 & 2 \\ 0 & 0 \end{vmatrix} = 0,$$
$$C_{31} = \begin{vmatrix} 2 & 4 \\ -3 & 1 \end{vmatrix} = 14,\quad C_{32} = -\begin{vmatrix} 1 & 4 \\ 0 & 1 \end{vmatrix} = -1,\quad C_{33} = \begin{vmatrix} 1 & 2 \\ 0 & -3 \end{vmatrix} = -3,$$
$$\operatorname{adj} A = \begin{bmatrix} -9 & -6 & 14 \\ 0 & 3 & -1 \\ 0 & 0 & -3 \end{bmatrix}\quad\text{and}\quad A^{-1} = \frac{1}{\det A}\operatorname{adj} A = \begin{bmatrix} 1 & 2/3 & -14/9 \\ 0 & -1/3 & 1/9 \\ 0 & 0 & 1/3 \end{bmatrix}.$$
17. Let $A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$. Then the cofactors of $A$ are $C_{11} = d$, $C_{12} = -c$, $C_{21} = -b$, and $C_{22} = a$. Thus $\operatorname{adj} A = \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}$. Since $\det A = ad - bc$, Theorem 8 gives that
$$A^{-1} = \frac{1}{\det A}\operatorname{adj} A = \frac{1}{ad - bc}\begin{bmatrix} d & -b \\ -c & a \end{bmatrix}.$$
This result is identical to that of Theorem 4 in Section 2.2.
18. Each cofactor of A is an integer since it is a sum of products of entries in A. Hence all entries in adj A will be integers. Since det A = 1, the inverse formula in Theorem 8 shows that all the entries in $A^{-1}$ will be integers.
19. The parallelogram is determined by the columns of $A = \begin{bmatrix} 5 & 6 \\ 2 & 4 \end{bmatrix}$, so the area of the parallelogram is $|\det A| = |8| = 8$.
20. The parallelogram is determined by the columns of $A = \begin{bmatrix} -1 & 4 \\ 3 & -5 \end{bmatrix}$, so the area of the parallelogram is $|\det A| = |-7| = 7$.
21. First translate one vertex to the origin. For example, subtract (–1, 0) from each vertex to get a new parallelogram with vertices (0, 0), (1, 5), (2, –4), and (3, 1). This parallelogram has the same area as the original, and is determined by the columns of $A = \begin{bmatrix} 1 & 2 \\ 5 & -4 \end{bmatrix}$, so the area of the parallelogram is $|\det A| = |-14| = 14$.
22. First translate one vertex to the origin. For example, subtract (0, –2) from each vertex to get a new parallelogram with vertices (0, 0), (6, 1), (–3, 3), and (3, 4). This parallelogram has the same area as the original, and is determined by the columns of $A = \begin{bmatrix} 6 & -3 \\ 1 & 3 \end{bmatrix}$, so the area of the parallelogram is $|\det A| = |21| = 21$.
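The translate-then-determinant recipe of Exercises 21–22 can be sketched as a short function; the sample call uses the Exercise 21 data (vertex (–1, 0) with adjacent vertices (0, 5) and (1, –4)):

```python
def parallelogram_area(p0, p1, p2):
    # Area of the parallelogram with vertices p1 and p2 adjacent to p0:
    # translate p0 to the origin, then take |det [p1-p0  p2-p0]|.
    u = (p1[0] - p0[0], p1[1] - p0[1])
    v = (p2[0] - p0[0], p2[1] - p0[1])
    return abs(u[0] * v[1] - u[1] * v[0])

print(parallelogram_area((-1, 0), (0, 5), (1, -4)))   # 14
```

The translation does not change the area, which is why the determinant of the translated edge vectors suffices.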
23. The parallelepiped is determined by the columns of $A = \begin{bmatrix} 1 & 1 & 7 \\ 0 & 2 & 1 \\ -2 & 4 & 0 \end{bmatrix}$, so the volume of the parallelepiped is $|\det A| = |22| = 22$.
24. The parallelepiped is determined by the columns of $A = \begin{bmatrix} 1 & -2 & -1 \\ 4 & -5 & 2 \\ 0 & 2 & -1 \end{bmatrix}$, so the volume of the parallelepiped is $|\det A| = |-15| = 15$.
25. The Invertible Matrix Theorem says that a 3 × 3 matrix A is not invertible if and only if its columns
are linearly dependent. This will happen if and only if one of the columns is a linear combination of
the others; that is, if one of the vectors is in the plane spanned by the other two vectors. This is
equivalent to the condition that the parallelepiped determined by the three vectors has zero volume,
which is in turn equivalent to the condition that det A = 0.
26. By definition, p + S is the set of all vectors of the form p + v, where v is in S. Applying T to a typical
vector in p + S, we have T(p + v) = T(p) + T(v). This vector is in the set denoted by T(p) + T(S). This
proves that T maps the set p + S into the set T(p) + T(S). Conversely, any vector in T(p) + T(S) has
the form T(p) + T(v) for some v in S. This vector may be written as T(p + v). This shows that every
vector in T(p) + T(S) is the image under T of some point p + v in p + S.
27. Since the parallelogram S is determined by the columns of $\begin{bmatrix} -2 & -2 \\ 3 & 5 \end{bmatrix}$, the area of S is
$$\left|\det\begin{bmatrix} -2 & -2 \\ 3 & 5 \end{bmatrix}\right| = |-4| = 4.$$
The matrix A has $\det A = \det\begin{bmatrix} 6 & -2 \\ -3 & 2 \end{bmatrix} = 6$. By Theorem 10, the area of T(S) is $|\det A|\cdot\{\text{area of } S\} = 6\cdot 4 = 24$. Alternatively, one may compute the vectors that determine the image, namely, the columns of
$$A[\mathbf{b}_1\ \mathbf{b}_2] = \begin{bmatrix} 6 & -2 \\ -3 & 2 \end{bmatrix}\begin{bmatrix} -2 & -2 \\ 3 & 5 \end{bmatrix} = \begin{bmatrix} -18 & -22 \\ 12 & 16 \end{bmatrix}.$$
The determinant of this matrix is –24, so the area of the image is 24.
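Theorem 10's area-scaling property can be checked numerically; the data below are assumed to be those of Exercise 27:

```python
def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def matmul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

S = [[-2, -2], [3, 5]]    # columns determine the parallelogram S
A = [[6, -2], [-3, 2]]    # standard matrix of T (assumed Exercise 27 data)
area_S = abs(det2(S))               # 4
area_TS = abs(det2(matmul2(A, S)))  # area of T(S)
print(area_S, area_TS)   # 4 24
```

Both routes agree: multiplying the edge matrix by A and taking |det|, or multiplying the original area by |det A|.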
28. Since the parallelogram S is determined by the columns of $\begin{bmatrix} 4 & 0 \\ -7 & 1 \end{bmatrix}$, the area of S is
$$\left|\det\begin{bmatrix} 4 & 0 \\ -7 & 1 \end{bmatrix}\right| = |4| = 4.$$
The matrix A has $\det A = \det\begin{bmatrix} 7 & 2 \\ 1 & 1 \end{bmatrix} = 5$. By Theorem 10, the area of T(S) is $|\det A|\cdot\{\text{area of } S\} = 5\cdot 4 = 20$. Alternatively, one may compute the vectors that determine the image, namely, the columns of
$$A[\mathbf{b}_1\ \mathbf{b}_2] = \begin{bmatrix} 7 & 2 \\ 1 & 1 \end{bmatrix}\begin{bmatrix} 4 & 0 \\ -7 & 1 \end{bmatrix} = \begin{bmatrix} 14 & 2 \\ -3 & 1 \end{bmatrix}.$$
The determinant of this matrix is 20, so the area of the image is 20.
29. The area of the triangle will be one half of the area of the parallelogram determined by $\mathbf{v}_1$ and $\mathbf{v}_2$. By Theorem 9, the area of the triangle will be $(1/2)|\det A|$, where $A = [\mathbf{v}_1\ \mathbf{v}_2]$.
30. Translate R to a new triangle of equal area by subtracting $(x_3, y_3)$ from each vertex. The new triangle has vertices (0, 0), $(x_1 - x_3, y_1 - y_3)$, and $(x_2 - x_3, y_2 - y_3)$. By Exercise 29, the area of the triangle will be
$$\frac{1}{2}\left|\det\begin{bmatrix} x_1 - x_3 & x_2 - x_3 \\ y_1 - y_3 & y_2 - y_3 \end{bmatrix}\right|.$$
Now consider using row operations and a cofactor expansion to compute the determinant in the formula:
$$\det\begin{bmatrix} x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \\ x_3 & y_3 & 1 \end{bmatrix} = \det\begin{bmatrix} x_1 - x_3 & y_1 - y_3 & 0 \\ x_2 - x_3 & y_2 - y_3 & 0 \\ x_3 & y_3 & 1 \end{bmatrix} = \det\begin{bmatrix} x_1 - x_3 & y_1 - y_3 \\ x_2 - x_3 & y_2 - y_3 \end{bmatrix}$$
By Theorem 5,
$$\det\begin{bmatrix} x_1 - x_3 & y_1 - y_3 \\ x_2 - x_3 & y_2 - y_3 \end{bmatrix} = \det\begin{bmatrix} x_1 - x_3 & x_2 - x_3 \\ y_1 - y_3 & y_2 - y_3 \end{bmatrix}$$
So the above observation allows us to state that the area of the triangle will be
$$\frac{1}{2}\left|\det\begin{bmatrix} x_1 - x_3 & x_2 - x_3 \\ y_1 - y_3 & y_2 - y_3 \end{bmatrix}\right| = \frac{1}{2}\left|\det\begin{bmatrix} x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \\ x_3 & y_3 & 1 \end{bmatrix}\right|$$
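The resulting triangle-area formula can be sketched directly:

```python
def triangle_area(v1, v2, v3):
    # (1/2)|det [v1-v3  v2-v3]|, the formula derived in Exercises 29-30.
    a = (v1[0] - v3[0], v1[1] - v3[1])
    b = (v2[0] - v3[0], v2[1] - v3[1])
    return abs(a[0] * b[1] - a[1] * b[0]) / 2

# A 3-4-5 right triangle has area (1/2)(4)(3) = 6.
print(triangle_area((0, 0), (4, 0), (0, 3)))   # 6.0
```

The sample triangle is an arbitrary sanity check, not an exercise from the text.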
31. a. To show that T(S) is bounded by the ellipsoid with equation $\dfrac{x_1^2}{a^2} + \dfrac{x_2^2}{b^2} + \dfrac{x_3^2}{c^2} = 1$, let $\mathbf{u} = \begin{bmatrix} u_1 \\ u_2 \\ u_3 \end{bmatrix}$ and let $\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = A\mathbf{u}$. Then $u_1 = x_1/a$, $u_2 = x_2/b$, and $u_3 = x_3/c$, and u lies inside S (or $u_1^2 + u_2^2 + u_3^2 \le 1$) if and only if x lies inside T(S) (or $\dfrac{x_1^2}{a^2} + \dfrac{x_2^2}{b^2} + \dfrac{x_3^2}{c^2} \le 1$).
b. By the generalization of Theorem 10,
$$\{\text{volume of ellipsoid}\} = \{\text{volume of } T(S)\} = |\det A|\cdot\{\text{volume of } S\} = abc\cdot\frac{4\pi}{3} = \frac{4\pi abc}{3}.$$
32. a. A linear transformation T that maps S onto S′ will map $\mathbf{e}_1$ to $\mathbf{v}_1$, $\mathbf{e}_2$ to $\mathbf{v}_2$, and $\mathbf{e}_3$ to $\mathbf{v}_3$; that is, $T(\mathbf{e}_1) = \mathbf{v}_1$, $T(\mathbf{e}_2) = \mathbf{v}_2$, and $T(\mathbf{e}_3) = \mathbf{v}_3$. The standard matrix for this transformation will be
$$A = [T(\mathbf{e}_1)\ \ T(\mathbf{e}_2)\ \ T(\mathbf{e}_3)] = [\mathbf{v}_1\ \ \mathbf{v}_2\ \ \mathbf{v}_3].$$
b. The area of the base of S is (1/2)(1)(1) = 1/2, so the volume of S is (1/3)(1/2)(1) = 1/6. By part a., T(S) = S′, so the generalization of Theorem 10 gives that the volume of S′ is $|\det A|\cdot\{\text{volume of } S\} = (1/6)|\det A|$.
33. [M] Answers will vary. In MATLAB, entries in B – inv(A) are approximately $10^{-15}$ or smaller.
34. [M] Answers will vary, as will the commands which produce the second entry of x. For example, the
MATLAB command is x2 = det([A(:,1) b A(:,3:4)])/det(A) while the Mathematica
command is x2 = Det[{Transpose[A][[1]],b,Transpose[A][[3]],
Transpose[A][[4]]}]/Det[A].
35. [M] MATLAB Student Version 4.0 uses 57,771 flops for inv A and 14,269,045 flops for the inverse
formula. The inv(A) command requires only about 0.4% of the operations for the inverse formula.
Chapter 3 SUPPLEMENTARY EXERCISES
1. a. True. The columns of A are linearly dependent.
b . True. See Exercise 30 in Section 3.2.
c. False. See Theorem 3(c); in this case $\det 5A = 5^3\det A$.
d . False. Consider $A = \begin{bmatrix} 2 & 0 \\ 0 & 1 \end{bmatrix}$, $B = \begin{bmatrix} 1 & 0 \\ 0 & 3 \end{bmatrix}$, and $A + B = \begin{bmatrix} 3 & 0 \\ 0 & 4 \end{bmatrix}$.
e . False. By Theorem 6, $\det 2A = 2^3\det A$.
f. False. See Theorem 3(b).
g . True. See Theorem 3(c).
h . True. See Theorem 3(a).
i. False. See Theorem 5.
j. False. See Theorem 3(c); this statement is false for n × n invertible matrices with n an even integer.
k . True. See Theorems 6 and 5; $\det A^T A = (\det A)^2$.
l. False. The coefficient matrix must be invertible.
m. False. The area of the triangle is 5.
n. True. See Theorem 6; $\det A^3 = (\det A)^3$.
o. False. See Exercise 31 in Section 3.2.
p. True. See Theorem 6.
2. $\begin{vmatrix} 12 & 13 & 14 \\ 15 & 16 & 17 \\ 18 & 19 & 20 \end{vmatrix} = \begin{vmatrix} 12 & 13 & 14 \\ 3 & 3 & 3 \\ 6 & 6 & 6 \end{vmatrix} = 0$
3. $\begin{vmatrix} 1 & a & b+c \\ 1 & b & a+c \\ 1 & c & a+b \end{vmatrix} = \begin{vmatrix} 1 & a & b+c \\ 0 & b-a & a-b \\ 0 & c-a & a-c \end{vmatrix} = (b-a)(c-a)\begin{vmatrix} 1 & a & b+c \\ 0 & 1 & -1 \\ 0 & 1 & -1 \end{vmatrix} = 0$
4. $\begin{vmatrix} a & b & c \\ a+x & b+x & c+x \\ a+y & b+y & c+y \end{vmatrix} = \begin{vmatrix} a & b & c \\ x & x & x \\ y & y & y \end{vmatrix} = xy\begin{vmatrix} a & b & c \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{vmatrix} = 0$
5. $\begin{vmatrix} 9 & 1 & 9 & 9 & 9 \\ 9 & 0 & 9 & 9 & 2 \\ 4 & 0 & 0 & 5 & 0 \\ 9 & 0 & 3 & 9 & 0 \\ 6 & 0 & 0 & 7 & 0 \end{vmatrix} = (-1)\begin{vmatrix} 9 & 9 & 9 & 2 \\ 4 & 0 & 5 & 0 \\ 9 & 3 & 9 & 0 \\ 6 & 0 & 7 & 0 \end{vmatrix} = (-1)(-2)\begin{vmatrix} 4 & 0 & 5 \\ 9 & 3 & 9 \\ 6 & 0 & 7 \end{vmatrix}$
$= (-1)(-2)(3)\begin{vmatrix} 4 & 5 \\ 6 & 7 \end{vmatrix} = (-1)(-2)(3)(-2) = -12$
6. $\begin{vmatrix} 4 & 8 & 8 & 8 & 5 \\ 0 & 1 & 0 & 0 & 0 \\ 6 & 8 & 8 & 8 & 7 \\ 0 & 8 & 8 & 3 & 0 \\ 0 & 8 & 2 & 0 & 0 \end{vmatrix} = (1)\begin{vmatrix} 4 & 8 & 8 & 5 \\ 6 & 8 & 8 & 7 \\ 0 & 8 & 3 & 0 \\ 0 & 2 & 0 & 0 \end{vmatrix} = (1)(2)\begin{vmatrix} 4 & 8 & 5 \\ 6 & 8 & 7 \\ 0 & 3 & 0 \end{vmatrix} = (1)(2)(-3)\begin{vmatrix} 4 & 5 \\ 6 & 7 \end{vmatrix} = (1)(2)(-3)(-2) = 12$
7. Expand along the first row to obtain
$$\det\begin{bmatrix} x & y & 1 \\ x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \end{bmatrix} = x\begin{vmatrix} y_1 & 1 \\ y_2 & 1 \end{vmatrix} - y\begin{vmatrix} x_1 & 1 \\ x_2 & 1 \end{vmatrix} + \begin{vmatrix} x_1 & y_1 \\ x_2 & y_2 \end{vmatrix} = (y_1 - y_2)x - (x_1 - x_2)y + (x_1y_2 - x_2y_1) = 0.$$
This is an equation of the form ax + by + c = 0, and since the points $(x_1, y_1)$ and $(x_2, y_2)$ are distinct, at least one of a and b is not zero. Thus the equation is the equation of a line. The points $(x_1, y_1)$ and $(x_2, y_2)$ are on the line, because when the coordinates of one of the points are substituted for x and y, two rows of the matrix are equal and so the determinant is zero.
8. Expand along the first row to obtain
$$\det\begin{bmatrix} x & y & 1 \\ x_1 & y_1 & 1 \\ 1 & m & 0 \end{bmatrix} = x\begin{vmatrix} y_1 & 1 \\ m & 0 \end{vmatrix} - y\begin{vmatrix} x_1 & 1 \\ 1 & 0 \end{vmatrix} + \begin{vmatrix} x_1 & y_1 \\ 1 & m \end{vmatrix} = (-m)x - (-1)y + (1)(mx_1 - y_1) = 0.$$
This equation may be rewritten as $mx - y - mx_1 + y_1 = 0$, or $y - y_1 = m(x - x_1)$.
9. $\det T = \begin{vmatrix} 1 & a & a^2 \\ 1 & b & b^2 \\ 1 & c & c^2 \end{vmatrix} = \begin{vmatrix} 1 & a & a^2 \\ 0 & b-a & b^2-a^2 \\ 0 & c-a & c^2-a^2 \end{vmatrix} = (b-a)(c-a)\begin{vmatrix} 1 & a & a^2 \\ 0 & 1 & b+a \\ 0 & 1 & c+a \end{vmatrix}$
$= (b-a)(c-a)\begin{vmatrix} 1 & a & a^2 \\ 0 & 1 & b+a \\ 0 & 0 & c-b \end{vmatrix} = (b-a)(c-a)(c-b)$
10. Expanding along the first row will show that $f(t) = \det V = c_0 + c_1t + c_2t^2 + c_3t^3$. By Exercise 9,
$$c_3 = \begin{vmatrix} 1 & x_1 & x_1^2 \\ 1 & x_2 & x_2^2 \\ 1 & x_3 & x_3^2 \end{vmatrix} = (x_2 - x_1)(x_3 - x_1)(x_3 - x_2) \neq 0$$
since $x_1$, $x_2$, and $x_3$ are distinct. Thus f(t) is a cubic polynomial. The points $(x_1, 0)$, $(x_2, 0)$, and $(x_3, 0)$ are on the graph of f, since when any of $x_1$, $x_2$, or $x_3$ are substituted for t, the matrix has two equal rows and thus its determinant (which is f(t)) is zero. Thus $f(x_i) = 0$ for i = 1, 2, 3.
11. To tell if a quadrilateral determined by four points is a parallelogram, first translate one of the vertices to the origin. If we label the vertices of this new quadrilateral as 0, $\mathbf{v}_1$, $\mathbf{v}_2$, and $\mathbf{v}_3$, then they will be the vertices of a parallelogram if one of $\mathbf{v}_1$, $\mathbf{v}_2$, or $\mathbf{v}_3$ is the sum of the other two. In this example, subtract (1, 4) from each vertex to get a new parallelogram with vertices 0 = (0, 0), $\mathbf{v}_1 = (-2, 1)$, $\mathbf{v}_2 = (2, 5)$, and $\mathbf{v}_3 = (4, 4)$. Since $\mathbf{v}_2 = \mathbf{v}_1 + \mathbf{v}_3$, the quadrilateral is a parallelogram as stated. The translated parallelogram has the same area as the original, and is determined by the columns of $A = [\mathbf{v}_1\ \mathbf{v}_3] = \begin{bmatrix} -2 & 4 \\ 1 & 4 \end{bmatrix}$, so the area of the parallelogram is $|\det A| = |-12| = 12$.
12. A 2 × 2 matrix A is invertible if and only if the parallelogram determined by the columns of A has
nonzero area.
13. By Theorem 8, $A\cdot\dfrac{1}{\det A}\operatorname{adj} A = I$. By the Invertible Matrix Theorem, adj A is invertible and
$$(\operatorname{adj} A)^{-1} = \frac{1}{\det A}A.$$
14. a. Consider the matrix $A_k = \begin{bmatrix} A & O \\ O & I_k \end{bmatrix}$, where 1 ≤ k ≤ n and O is an appropriately sized zero matrix. We will show that $\det A_k = \det A$ for all 1 ≤ k ≤ n by mathematical induction.
First let k = 1. Expand along the last row to obtain
$$\det A_1 = \det\begin{bmatrix} A & O \\ O & 1 \end{bmatrix} = (-1)^{(n+1)+(n+1)}\cdot 1\cdot\det A = \det A.$$
Now let 1 < k ≤ n and assume that $\det A_{k-1} = \det A$. Expand along the last row of $A_k$ to obtain
$$\det A_k = \det\begin{bmatrix} A & O \\ O & I_k \end{bmatrix} = (-1)^{(n+k)+(n+k)}\cdot 1\cdot\det A_{k-1} = \det A_{k-1} = \det A.$$
Thus we have proven the result, and the determinant of the matrix in question is det A.
b. Consider the matrix $A_k = \begin{bmatrix} I_k & O \\ C_k & D \end{bmatrix}$, where 1 ≤ k ≤ n, $C_k$ is an n × k matrix and O is an appropriately sized zero matrix. We will show that $\det A_k = \det D$ for all 1 ≤ k ≤ n by mathematical induction.
First let k = 1. Expand along the first row to obtain
$$\det A_1 = \det\begin{bmatrix} 1 & O \\ C_1 & D \end{bmatrix} = (-1)^{1+1}\cdot 1\cdot\det D = \det D.$$
Now let 1 < k ≤ n and assume that $\det A_{k-1} = \det D$. Expand along the first row of $A_k$ to obtain
$$\det A_k = \det\begin{bmatrix} I_k & O \\ C_k & D \end{bmatrix} = (-1)^{1+1}\cdot 1\cdot\det A_{k-1} = \det A_{k-1} = \det D.$$
Thus we have proven the result, and the determinant of the matrix in question is det D.
c. By combining parts a. and b., we have shown that
$$\det\begin{bmatrix} A & O \\ C & D \end{bmatrix} = \det\left(\begin{bmatrix} A & O \\ O & I \end{bmatrix}\begin{bmatrix} I & O \\ C & D \end{bmatrix}\right) = (\det A)(\det D).$$
From this result and Theorem 5, we have
$$\det\begin{bmatrix} A & B \\ O & D \end{bmatrix} = \det\begin{bmatrix} A & B \\ O & D \end{bmatrix}^T = \det\begin{bmatrix} A^T & O \\ B^T & D^T \end{bmatrix} = (\det A^T)(\det D^T) = (\det A)(\det D).$$
15. a. Compute the right side of the equation:
$$\begin{bmatrix} I & O \\ X & I \end{bmatrix}\begin{bmatrix} A & B \\ O & Y \end{bmatrix} = \begin{bmatrix} A & B \\ XA & XB + Y \end{bmatrix}$$
Set this equal to the left side of the equation:
$$\begin{bmatrix} A & B \\ C & D \end{bmatrix} = \begin{bmatrix} A & B \\ XA & XB + Y \end{bmatrix}\quad\text{so that}\quad XA = C,\quad XB + Y = D.$$
Since XA = C and A is invertible, $X = CA^{-1}$. Since XB + Y = D, $Y = D - XB = D - CA^{-1}B$. Thus by Exercise 14(c),
$$\det\begin{bmatrix} A & B \\ C & D \end{bmatrix} = \det\begin{bmatrix} I & O \\ CA^{-1} & I \end{bmatrix}\det\begin{bmatrix} A & B \\ O & D - CA^{-1}B \end{bmatrix} = (\det A)(\det(D - CA^{-1}B))$$
b. From part a.,
$$\det\begin{bmatrix} A & B \\ C & D \end{bmatrix} = (\det A)(\det(D - CA^{-1}B)) = \det[A(D - CA^{-1}B)] = \det[AD - ACA^{-1}B] = \det[AD - CAA^{-1}B] = \det[AD - CB]$$
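The identity $\det\begin{bmatrix} A & B \\ C & D \end{bmatrix} = \det(AD - CB)$ when $CA = AC$ can be spot-checked numerically. The 2 × 2 blocks below are arbitrary choices, with C a scalar matrix so that CA = AC holds automatically:

```python
def det(M):
    # Cofactor expansion along the first row (fine for small matrices).
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def sub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def block(A, B, C, D):
    # Assemble the 2n x 2n block matrix [[A, B], [C, D]].
    return [ra + rb for ra, rb in zip(A, B)] + \
           [rc + rd for rc, rd in zip(C, D)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 1]]
C = [[2, 0], [0, 2]]   # 2I, so CA = AC
D = [[5, 6], [7, 8]]
lhs = det(block(A, B, C, D))
rhs = det(sub(matmul(A, D), matmul(C, B)))
print(lhs, rhs)   # 92 92
```

A single numerical check is not a proof, but it is a quick guard against sign errors in the derivation above.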
16. a. Doing the given operations does not change the determinant of A since the given operations are all row replacement operations. The resulting matrix is
$$\begin{bmatrix} a-b & b-a & 0 & \cdots & 0 \\ 0 & a-b & b-a & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & b-a \\ b & b & b & \cdots & a \end{bmatrix}$$
b. Since column replacement operations are equivalent to row operations on $A^T$ and $\det A^T = \det A$, the given operations do not change the determinant of the matrix. The resulting matrix is
$$\begin{bmatrix} a-b & 0 & 0 & \cdots & 0 \\ 0 & a-b & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 0 \\ b & 2b & 3b & \cdots & a + (n-1)b \end{bmatrix}$$
c. Since the preceding matrix is a triangular matrix with the same determinant as A,
$$\det A = (a - b)^{n-1}(a + (n-1)b).$$
17. First consider the case n = 2. In this case
$$\det B = \begin{vmatrix} a-b & b \\ 0 & a \end{vmatrix} = a(a-b),\qquad \det C = \begin{vmatrix} b & b \\ b & a \end{vmatrix} = b(a-b),$$
so $\det A = \det B + \det C = a(a-b) + b(a-b) = (a-b)(a+b) = (a-b)^{2-1}(a + (2-1)b)$, and the formula holds for n = 2.
Now assume that the formula holds for all (k – 1) × (k – 1) matrices, and let A, B, and C be k × k matrices. By a cofactor expansion along the first column,
$$\det B = (a-b)\begin{vmatrix} a & b & \cdots & b \\ b & a & \cdots & b \\ \vdots & \vdots & \ddots & \vdots \\ b & b & \cdots & a \end{vmatrix} = (a-b)(a-b)^{k-2}(a + (k-2)b) = (a-b)^{k-1}(a + (k-2)b)$$
since the matrix in the above formula is a (k – 1) × (k – 1) matrix. We can perform a series of row operations on C to "zero out" below the first pivot, and produce the following matrix whose determinant is det C:
$$\begin{bmatrix} b & b & \cdots & b \\ 0 & a-b & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a-b \end{bmatrix}.$$
Since this is a triangular matrix, we have found that $\det C = b(a-b)^{k-1}$. Thus
$$\det A = \det B + \det C = (a-b)^{k-1}(a + (k-2)b) + b(a-b)^{k-1} = (a-b)^{k-1}(a + (k-1)b),$$
which is what was to be shown. Thus the formula has been proven by mathematical induction.
18. [M] Since the first matrix has a = 3, b = 8, and n = 4, its determinant is $(3-8)^{4-1}(3 + (4-1)8) = (-5)^3(3 + 24) = (-125)(27) = -3375$. Since the second matrix has a = 8, b = 3, and n = 5, its determinant is $(8-3)^{5-1}(8 + (5-1)3) = (5)^4(8 + 12) = (625)(20) = 12{,}500$.
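The closed form $(a-b)^{n-1}(a + (n-1)b)$ from Exercises 16–17 can be checked against a brute-force determinant:

```python
def det(M):
    # Cofactor expansion along the first row (fine for small n).
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def ab_matrix(a, b, n):
    # a on the diagonal, b everywhere else
    return [[a if i == j else b for j in range(n)] for i in range(n)]

for a, b, n in [(3, 8, 4), (8, 3, 5)]:
    assert det(ab_matrix(a, b, n)) == (a - b) ** (n - 1) * (a + (n - 1) * b)

print(det(ab_matrix(3, 8, 4)), det(ab_matrix(8, 3, 5)))   # -3375 12500
```

This reproduces both determinants of Exercise 18 directly from the formula.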
19. [M] We find that
$$\begin{vmatrix} 1 & 1 & 1 \\ 1 & 2 & 2 \\ 1 & 2 & 3 \end{vmatrix} = 1,\quad \begin{vmatrix} 1 & 1 & 1 & 1 \\ 1 & 2 & 2 & 2 \\ 1 & 2 & 3 & 3 \\ 1 & 2 & 3 & 4 \end{vmatrix} = 1,\quad \begin{vmatrix} 1 & 1 & 1 & 1 & 1 \\ 1 & 2 & 2 & 2 & 2 \\ 1 & 2 & 3 & 3 & 3 \\ 1 & 2 & 3 & 4 & 4 \\ 1 & 2 & 3 & 4 & 5 \end{vmatrix} = 1.$$
Our conjecture then is that
$$\begin{vmatrix} 1 & 1 & 1 & \cdots & 1 \\ 1 & 2 & 2 & \cdots & 2 \\ 1 & 2 & 3 & \cdots & 3 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & 2 & 3 & \cdots & n \end{vmatrix} = 1.$$
To show this, consider using row replacement operations to "zero out" below the first pivot. The resulting matrix is
$$\begin{bmatrix} 1 & 1 & 1 & \cdots & 1 \\ 0 & 1 & 1 & \cdots & 1 \\ 0 & 1 & 2 & \cdots & 2 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 1 & 2 & \cdots & n-1 \end{bmatrix}.$$
Now use row replacement operations to "zero out" below the second pivot, and so on. The final matrix which results from this process is
$$\begin{bmatrix} 1 & 1 & 1 & \cdots & 1 \\ 0 & 1 & 1 & \cdots & 1 \\ 0 & 0 & 1 & \cdots & 1 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \end{bmatrix},$$
which is an upper triangular matrix with determinant 1.
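The conjecture can also be spot-checked by brute force for small n; the matrix has (i, j) entry min(i, j), counting from 1:

```python
def det(M):
    # Cofactor expansion along the first row (fine for small n).
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

for n in range(1, 7):
    M = [[min(i, j) + 1 for j in range(n)] for i in range(n)]
    assert det(M) == 1

print("determinant is 1 for n = 1..6")
```

The row-reduction argument above proves the pattern for every n; the loop merely confirms the small cases.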
20. [M] We find that
$$\begin{vmatrix} 1 & 1 & 1 \\ 1 & 3 & 3 \\ 1 & 3 & 6 \end{vmatrix} = 6,\quad \begin{vmatrix} 1 & 1 & 1 & 1 \\ 1 & 3 & 3 & 3 \\ 1 & 3 & 6 & 6 \\ 1 & 3 & 6 & 9 \end{vmatrix} = 18,\quad \begin{vmatrix} 1 & 1 & 1 & 1 & 1 \\ 1 & 3 & 3 & 3 & 3 \\ 1 & 3 & 6 & 6 & 6 \\ 1 & 3 & 6 & 9 & 9 \\ 1 & 3 & 6 & 9 & 12 \end{vmatrix} = 54.$$
Our conjecture then is that
$$\begin{vmatrix} 1 & 1 & 1 & \cdots & 1 \\ 1 & 3 & 3 & \cdots & 3 \\ 1 & 3 & 6 & \cdots & 6 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & 3 & 6 & \cdots & 3(n-1) \end{vmatrix} = 2\cdot 3^{n-2}.$$
To show this, consider using row replacement operations to "zero out" below the first pivot. The resulting matrix is
$$\begin{bmatrix} 1 & 1 & 1 & \cdots & 1 \\ 0 & 2 & 2 & \cdots & 2 \\ 0 & 2 & 5 & \cdots & 5 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 2 & 5 & \cdots & 3(n-1)-1 \end{bmatrix}.$$
Now use row replacement operations to "zero out" below the second pivot. The matrix which results from this process is
$$\begin{bmatrix} 1 & 1 & 1 & 1 & 1 & \cdots & 1 \\ 0 & 2 & 2 & 2 & 2 & \cdots & 2 \\ 0 & 0 & 3 & 3 & 3 & \cdots & 3 \\ 0 & 0 & 3 & 6 & 6 & \cdots & 6 \\ 0 & 0 & 3 & 6 & 9 & \cdots & 9 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 3 & 6 & 9 & \cdots & 3(n-2) \end{bmatrix}.$$
This matrix has the same determinant as the original matrix, and is recognizable as a block matrix of the form $\begin{bmatrix} A & B \\ O & D \end{bmatrix}$, where
$$A = \begin{bmatrix} 1 & 1 \\ 0 & 2 \end{bmatrix}\quad\text{and}\quad D = \begin{bmatrix} 3 & 3 & 3 & \cdots & 3 \\ 3 & 6 & 6 & \cdots & 6 \\ 3 & 6 & 9 & \cdots & 9 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 3 & 6 & 9 & \cdots & 3(n-2) \end{bmatrix} = 3\begin{bmatrix} 1 & 1 & 1 & \cdots & 1 \\ 1 & 2 & 2 & \cdots & 2 \\ 1 & 2 & 3 & \cdots & 3 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & 2 & 3 & \cdots & n-2 \end{bmatrix}.$$
As in Exercise 14(c), the determinant of the matrix $\begin{bmatrix} A & B \\ O & D \end{bmatrix}$ is (det A)(det D) = 2 det D. Since D is an (n – 2) × (n – 2) matrix,
$$\det D = 3^{n-2}\begin{vmatrix} 1 & 1 & 1 & \cdots & 1 \\ 1 & 2 & 2 & \cdots & 2 \\ 1 & 2 & 3 & \cdots & 3 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & 2 & 3 & \cdots & n-2 \end{vmatrix} = 3^{n-2}(1) = 3^{n-2}$$
by Exercise 19. Thus the determinant of the matrix $\begin{bmatrix} A & B \\ O & D \end{bmatrix}$ is $2\det D = 2\cdot 3^{n-2}$.
4.1 SOLUTIONS

Notes:
This section is designed to avoid the standard exercises in which a student is asked to check ten axioms on an array of sets. Theorem 1 provides the main homework tool in this section for showing that a set is a subspace. Students should be taught how to check the closure axioms. The exercises in this section (and the next few sections) emphasize $\mathbb{R}^n$, to give students time to absorb the abstract concepts. Other vector spaces do appear later in the chapter: the space of signals is used in Section 4.8, and the spaces of polynomials are used in many sections of Chapters 4 and 6.

1. a. If u and v are in V, then their entries are nonnegative. Since a sum of nonnegative numbers is nonnegative, the vector u + v has nonnegative entries. Thus u + v is in V.
b. Example: If $\mathbf{u} = \begin{bmatrix} 2 \\ 2 \end{bmatrix}$ and c = –1, then u is in V but cu is not in V.

2. a. If $\mathbf{u} = \begin{bmatrix} x \\ y \end{bmatrix}$ is in W, then the vector $c\mathbf{u} = c\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} cx \\ cy \end{bmatrix}$ is in W because $(cx)(cy) = c^2(xy) \ge 0$ since $xy \ge 0$.
b. Example: If $\mathbf{u} = \begin{bmatrix} -1 \\ -7 \end{bmatrix}$ and $\mathbf{v} = \begin{bmatrix} 2 \\ 3 \end{bmatrix}$, then u and v are in W but u + v is not in W.

3. Example: If $\mathbf{u} = \begin{bmatrix} .5 \\ .5 \end{bmatrix}$ and c = 4, then u is in H but cu is not in H. Since H is not closed under scalar multiplication, H is not a subspace of $\mathbb{R}^2$.

4. Note that u and v are on the line L, but u + v is not.
[Figure: the line L with the vectors u, v, and u + v.]
5. Yes. Since the set is $\text{Span}\{t^2\}$, the set is a subspace by Theorem 1.
6. No. The zero vector is not in the set.
7. No. The set is not closed under multiplication by scalars which are not integers.
8. Yes. The zero vector is in the set H. If p and q are in H, then (p + q)(0) = p(0) + q(0) = 0 + 0 = 0, so p + q is in H. For any scalar c, (cp)(0) = c·p(0) = c·0 = 0, so cp is in H. Thus H is a subspace by Theorem 1.
9. The set H = Span{v}, where $\mathbf{v} = \begin{bmatrix} 2 \\ 5 \\ 3 \end{bmatrix}$. Thus H is a subspace of $\mathbb{R}^3$ by Theorem 1.
10. The set H = Span{v}, where $\mathbf{v} = \begin{bmatrix} 3 \\ 0 \\ 7 \end{bmatrix}$. Thus H is a subspace of $\mathbb{R}^3$ by Theorem 1.
11. The set W = Span{u, v}, where $\mathbf{u} = \begin{bmatrix} 2 \\ 1 \\ 0 \end{bmatrix}$ and $\mathbf{v} = \begin{bmatrix} 3 \\ 0 \\ 2 \end{bmatrix}$. Thus W is a subspace of $\mathbb{R}^3$ by Theorem 1.
12. The set W = Span{u, v}, where $\mathbf{u} = \begin{bmatrix} 2 \\ 2 \\ 2 \\ 0 \end{bmatrix}$ and $\mathbf{v} = \begin{bmatrix} 4 \\ 0 \\ 3 \\ 5 \end{bmatrix}$. Thus W is a subspace of $\mathbb{R}^4$ by Theorem 1.
13. a. The vector w is not in the set $\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$. There are 3 vectors in the set $\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$.
b. The set $\text{Span}\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$ contains infinitely many vectors.
c. The vector w is in the subspace spanned by $\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$ if and only if the equation $x_1\mathbf{v}_1 + x_2\mathbf{v}_2 + x_3\mathbf{v}_3 = \mathbf{w}$ has a solution. Row reducing the augmented matrix for this system of linear equations gives
$$\begin{bmatrix} 1 & 2 & 4 & 3 \\ 0 & 1 & 2 & 1 \\ -1 & 3 & 6 & 2 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & 0 & 1 \\ 0 & 1 & 2 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix},$$
so the equation has a solution and w is in the subspace spanned by $\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$.
14. The augmented matrix is found as in Exercise 13c. Since
$$\begin{bmatrix} 1 & 2 & 4 & 1 \\ 0 & 1 & 2 & 3 \\ -1 & 3 & 6 & 14 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & 0 & -5 \\ 0 & 1 & 2 & 3 \\ 0 & 0 & 0 & 0 \end{bmatrix},$$
the equation $x_1\mathbf{v}_1 + x_2\mathbf{v}_2 + x_3\mathbf{v}_3 = \mathbf{w}$ has a solution, and the vector w is in the subspace spanned by $\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$.
15. Since the zero vector is not in W, W is not a vector space.
16. Since the zero vector is not in W, W is not a vector space.
17. Since a vector w in W may be written as
$$\mathbf{w} = a\begin{bmatrix} 2 \\ 0 \\ 1 \\ 0 \end{bmatrix} + b\begin{bmatrix} 1 \\ 3 \\ 0 \\ 3 \end{bmatrix} + c\begin{bmatrix} 0 \\ 1 \\ 3 \\ 0 \end{bmatrix},$$
$$S = \left\{\begin{bmatrix} 2 \\ 0 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\ 3 \\ 0 \\ 3 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ 3 \\ 0 \end{bmatrix}\right\}$$
is a set that spans W.
18. Since a vector w in W may be written as
$$\mathbf{w} = a\begin{bmatrix} 4 \\ 0 \\ 1 \\ 0 \end{bmatrix} + b\begin{bmatrix} 3 \\ 0 \\ 3 \\ 3 \end{bmatrix} + c\begin{bmatrix} 0 \\ 0 \\ 1 \\ 2 \end{bmatrix},$$
$$S = \left\{\begin{bmatrix} 4 \\ 0 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 3 \\ 0 \\ 3 \\ 3 \end{bmatrix}, \begin{bmatrix} 0 \\ 0 \\ 1 \\ 2 \end{bmatrix}\right\}$$
is a set that spans W.
19. Let H be the set of all functions described by $y(t) = c_1\cos\omega t + c_2\sin\omega t$. Then H is a subset of the vector space V of all real-valued functions, and may be written as H = Span{cos ωt, sin ωt}. By Theorem 1, H is a subspace of V and is hence a vector space.
20. a. The following facts about continuous functions must be shown.
1. The constant function f(t) = 0 is continuous.
2. The sum of two continuous functions is continuous.
3. A constant multiple of a continuous function is continuous.
b. Let H = {f in C[a, b]: f(a) = f(b)}.
1. Let g(t) = 0 for all t in [a, b]. Then g(a) = g(b) = 0, so g is in H.
2. Let g and h be in H. Then g(a) = g(b) and h(a) = h(b), and (g + h)(a) = g(a) + h(a) =
g(b) + h(b) = (g + h)(b), so g + h is in H.
3. Let g be in H. Then g(a) = g(b), and (cg)(a) = cg(a) = cg(b) = (cg)(b), so cg is in H.
Thus H is a subspace of C[a, b].
21. The set H is a subspace of $M_{2\times 2}$. The zero matrix is in H, the sum of two upper triangular matrices is upper triangular, and a scalar multiple of an upper triangular matrix is upper triangular.
22. The set H is a subspace of $M_{2\times 4}$. The 2 × 4 zero matrix 0 is in H because F0 = 0. If A and B are matrices in H, then F(A + B) = FA + FB = 0 + 0 = 0, so A + B is in H. If A is in H and c is a scalar, then F(cA) = c(FA) = c·0 = 0, so cA is in H.
23. a. False. The zero vector in V is the function f whose values f(t) are zero for all t in $\mathbb{R}$.
b . False. An arrow in three-dimensional space is an example of a vector, but not every arrow is a
vector.
c . False. See Exercises 1, 2, and 3 for examples of subsets which contain the zero vector but are not
subspaces.
d . True. See the paragraph before Example 6.
e . False. Digital signals are used. See Example 3.
24. a. True. See the definition of a vector space.
b . True. See statement (3) in the box before Example 1.
c . True. See the paragraph before Example 6.
d . False. See Example 8.
e . False. The second and third parts of the conditions are stated incorrectly. For example, part (ii)
does not state that u and v represent all possible elements of H.
25. 2, 4
26. a. 3
b . 5
c . 4
27. a. 8
b . 3
c . 5
d . 4
28. a. 4
b . 7
c . 3
d . 5
e . 4
29. Consider u + (–1)u. By Axiom 10, u + (–1)u = 1u + (–1)u. By Axiom 8, 1u + (–1)u = (1 + (–1))u =
0u. By Exercise 27, 0u = 0. Thus u + (–1)u = 0, and by Exercise 26 (–1)u = –u.
30. By Axiom 10, u = 1u. Since c is nonzero, $c^{-1}c = 1$, and $\mathbf{u} = (c^{-1}c)\mathbf{u}$. By Axiom 9, $(c^{-1}c)\mathbf{u} = c^{-1}(c\mathbf{u}) = c^{-1}\mathbf{0}$ since cu = 0. Thus $\mathbf{u} = c^{-1}\mathbf{0} = \mathbf{0}$ by Property (2), proven in Exercise 28.
31. Any subspace H that contains u and v must also contain all scalar multiples of u and v, and hence
must also contain all sums of scalar multiples of u and v. Thus H must contain all linear
combinations of u and v, or Span
{u, v}.
Note:
Exercises 32–34 provide good practice for mathematics majors because these arguments involve
simple symbol manipulation typical of mathematical proofs. Most students outside mathematics might
profit more from other types of exercises.
32. Both H and K contain the zero vector of V because they are subspaces of V. Thus the zero vector of V is in H ∩ K. Let u and v be in H ∩ K. Then u and v are in H. Since H is a subspace u + v is in H. Likewise u and v are in K. Since K is a subspace u + v is in K. Thus u + v is in H ∩ K. Let u be in H ∩ K. Then u is in H. Since H is a subspace cu is in H. Likewise u is in K. Since K is a subspace cu is in K. Thus cu is in H ∩ K for any scalar c, and H ∩ K is a subspace of V.
The union of two subspaces is not in general a subspace. For an example in $\mathbb{R}^2$ let H be the x-axis and let K be the y-axis. Then both H and K are subspaces of $\mathbb{R}^2$, but H ∪ K is not closed under vector addition. The subset H ∪ K is thus not a subspace of $\mathbb{R}^2$.
33. a. Given subspaces H and K of a vector space V, the zero vector of V belongs to H + K, because 0 is in both H and K (since they are subspaces) and 0 = 0 + 0. Next, take two vectors in H + K, say $\mathbf{w}_1 = \mathbf{u}_1 + \mathbf{v}_1$ and $\mathbf{w}_2 = \mathbf{u}_2 + \mathbf{v}_2$ where $\mathbf{u}_1$ and $\mathbf{u}_2$ are in H, and $\mathbf{v}_1$ and $\mathbf{v}_2$ are in K. Then
$$\mathbf{w}_1 + \mathbf{w}_2 = (\mathbf{u}_1 + \mathbf{v}_1) + (\mathbf{u}_2 + \mathbf{v}_2) = (\mathbf{u}_1 + \mathbf{u}_2) + (\mathbf{v}_1 + \mathbf{v}_2)$$
because vector addition in V is commutative and associative. Now $\mathbf{u}_1 + \mathbf{u}_2$ is in H and $\mathbf{v}_1 + \mathbf{v}_2$ is in K because H and K are subspaces. This shows that $\mathbf{w}_1 + \mathbf{w}_2$ is in H + K. Thus H + K is closed under addition of vectors. Finally, for any scalar c,
$$c\mathbf{w}_1 = c(\mathbf{u}_1 + \mathbf{v}_1) = c\mathbf{u}_1 + c\mathbf{v}_1.$$
The vector $c\mathbf{u}_1$ belongs to H and $c\mathbf{v}_1$ belongs to K, because H and K are subspaces. Thus, $c\mathbf{w}_1$ belongs to H + K, so H + K is closed under multiplication by scalars. These arguments show that H + K satisfies all three conditions necessary to be a subspace of V.
b. Certainly H is a subset of H + K because every vector u in H may be written as u + 0, where the zero vector 0 is in K (and also in H, of course). Since H contains the zero vector of H + K, and H is closed under vector addition and multiplication by scalars (because H is a subspace of V), H is a subspace of H + K. The same argument applies when H is replaced by K, so K is also a subspace of H + K.
34. A proof that $H + K = \text{Span}\{\mathbf{u}_1, \ldots, \mathbf{u}_p, \mathbf{v}_1, \ldots, \mathbf{v}_q\}$ has two parts. First, one must show that H + K is a subset of $\text{Span}\{\mathbf{u}_1, \ldots, \mathbf{u}_p, \mathbf{v}_1, \ldots, \mathbf{v}_q\}$. Second, one must show that $\text{Span}\{\mathbf{u}_1, \ldots, \mathbf{u}_p, \mathbf{v}_1, \ldots, \mathbf{v}_q\}$ is a subset of H + K.
(1) A typical vector in H has the form $c_1\mathbf{u}_1 + \cdots + c_p\mathbf{u}_p$ and a typical vector in K has the form $d_1\mathbf{v}_1 + \cdots + d_q\mathbf{v}_q$. The sum of these two vectors is a linear combination of $\mathbf{u}_1, \ldots, \mathbf{u}_p, \mathbf{v}_1, \ldots, \mathbf{v}_q$ and so belongs to $\text{Span}\{\mathbf{u}_1, \ldots, \mathbf{u}_p, \mathbf{v}_1, \ldots, \mathbf{v}_q\}$. Thus H + K is a subset of $\text{Span}\{\mathbf{u}_1, \ldots, \mathbf{u}_p, \mathbf{v}_1, \ldots, \mathbf{v}_q\}$.
(2) Each of the vectors $\mathbf{u}_1, \ldots, \mathbf{u}_p, \mathbf{v}_1, \ldots, \mathbf{v}_q$ belongs to H + K, by Exercise 33(b), and so any linear combination of these vectors belongs to H + K, since H + K is a subspace, by Exercise 33(a). Thus, $\text{Span}\{\mathbf{u}_1, \ldots, \mathbf{u}_p, \mathbf{v}_1, \ldots, \mathbf{v}_q\}$ is a subset of H + K.
202 CHAPTER 4 Vector Spaces
Copyright © 2012 Pearson Education, Inc. Publishing as Addison-Wesley.
35. [M] Row reduction of the augmented matrix [v1 v2 v3 w] shows that the corresponding system is
consistent, so w is in the subspace spanned by {v1, v2, v3}.
36. [M] Row reduction of the augmented matrix [A y] shows that the corresponding system Ax = y is
consistent, so y is in the subspace spanned by the columns of A.
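Exercises 35 and 36 decide column-space membership by row reducing an augmented matrix and checking consistency. Numerically, the same test is a rank comparison: y is in Col A exactly when appending y to A leaves the rank unchanged. A minimal NumPy sketch with made-up data (not the book's matrices):

```python
import numpy as np

def in_column_space(A, y, tol=1e-10):
    """y is in Col A iff appending y to A does not raise the rank."""
    ranked = np.linalg.matrix_rank(np.column_stack([A, y]), tol=tol)
    return ranked == np.linalg.matrix_rank(A, tol=tol)

# Illustrative data: y is built as a combination of A's columns, z is not.
A = np.array([[1.0, 0.0], [2.0, 1.0], [3.0, 4.0]])
y = 2 * A[:, 0] - A[:, 1]          # in Col A by construction
z = np.array([0.0, 0.0, 1.0])      # not in the span of A's columns

print(in_column_space(A, y))       # True
print(in_column_space(A, z))       # False
```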
37. [M] [Graphs omitted.] A conjecture from the graph of f(t) is that f(t) = cos 4t. A conjecture from
the graph of g(t) is that g(t) = cos 6t.
38. [M] [Graphs omitted.] Conjectures from the graphs: f(t) = sin 3t, g(t) = cos 4t, and h(t) = sin 5t.
4.2 SOLUTIONS
Notes:
This section provides a review of Chapter 1 using the new terminology. Linear transformations are
introduced quickly since students are already comfortable with the idea from ℝⁿ. The key exercises are
17–26, which are straightforward but help to solidify the notions of null spaces and column spaces.
Exercises 30–36 deal with the kernel and range of a linear transformation and are progressively more
advanced theoretically. The idea in Exercises 7–14 is for the student to use Theorems 1, 2, or 3 to
determine whether a given set is a subspace.
1. One calculates that
Aw = [ 3 -5 -3 ] [ 1 ]   [ 0 ]
     [ 6 -2  0 ] [ 3 ] = [ 0 ]
     [-8  4  1 ] [-4 ]   [ 0 ]
so w is in Nul A.
2. One calculates that
Aw = [ 2  6  4 ] [ 1 ]   [ 0 ]
     [-3  2  5 ] [-1 ] = [ 0 ]
     [-5 -4  1 ] [ 1 ]   [ 0 ]
so w is in Nul A.
3. First find the general solution of Ax = 0 in terms of the free variables. Since
[A 0] ~ [ 1 0 -2  4 0 ]
        [ 0 1  3 -2 0 ]
the general solution is x1 = 2x3 - 4x4, x2 = -3x3 + 2x4, with x3 and x4 free. So
x = x3(2, -3, 1, 0) + x4(-4, 2, 0, 1),
and a spanning set for Nul A is {(2, -3, 1, 0), (-4, 2, 0, 1)}.
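The routine in Exercise 3 (read the general solution off the reduced echelon form, one spanning vector per free variable) can be checked mechanically: SymPy's nullspace() produces exactly that spanning set. The matrix below is the reduced form as reconstructed here, so treat its entries as illustrative:

```python
from sympy import Matrix, zeros

# Reduced echelon form of the coefficient matrix (entries illustrative).
A = Matrix([[1, 0, -2, 4],
            [0, 1, 3, -2]])

basis = A.nullspace()              # one vector per free variable (x3 and x4)
assert len(basis) == 2
for v in basis:
    assert A * v == zeros(2, 1)    # every basis vector solves Ax = 0
print([list(v) for v in basis])
```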
4. First find the general solution of Ax = 0 in terms of the free variables. Since
[A 0] ~ [ 1 -3 0 0 0 ]
        [ 0  0 1 0 0 ]
the general solution is x1 = 3x2, x3 = 0, with x2 and x4 free. So
x = x2(3, 1, 0, 0) + x4(0, 0, 0, 1),
and a spanning set for Nul A is {(3, 1, 0, 0), (0, 0, 0, 1)}.
5. First find the general solution of Ax = 0 in terms of the free variables. Since
[A 0] ~ [ 1 -4 0  2 0 0 ]
        [ 0  0 1 -5 0 0 ]
        [ 0  0 0  0 1 0 ]
the general solution is x1 = 4x2 - 2x4, x3 = 5x4, x5 = 0, with x2 and x4 free. So
x = x2(4, 1, 0, 0, 0) + x4(-2, 0, 5, 1, 0),
and a spanning set for Nul A is {(4, 1, 0, 0, 0), (-2, 0, 5, 1, 0)}.
6. First find the general solution of Ax = 0 in terms of the free variables. Since
[A 0] ~ [ 1 0 -5  6 -1 0 ]
        [ 0 1  3 -1  0 0 ]
        [ 0 0  0  0  0 0 ]
the general solution is x1 = 5x3 - 6x4 + x5, x2 = -3x3 + x4, with x3, x4, and x5 free. So
x = x3(5, -3, 1, 0, 0) + x4(-6, 1, 0, 1, 0) + x5(1, 0, 0, 0, 1),
and a spanning set for Nul A is {(5, -3, 1, 0, 0), (-6, 1, 0, 1, 0), (1, 0, 0, 0, 1)}.
7. The set W is a subset of ℝ³. If W were a vector space (under the standard operations in ℝ³), then it
would be a subspace of ℝ³. But W is not a subspace of ℝ³ since the zero vector is not in W. Thus W is
not a vector space.
8. The set W is a subset of ℝ³. If W were a vector space (under the standard operations in ℝ³), then it
would be a subspace of ℝ³. But W is not a subspace of ℝ³ since the zero vector is not in W. Thus W is
not a vector space.
9. The set W is the set of all solutions to the homogeneous system of equations p – 3q – 4s = 0,
2p – 5r – s = 0. Thus W = Nul A, where (with respect to the variable order p, q, r, s)
A = [ 1 -3  0 -4 ]
    [ 2  0 -5 -1 ]
Thus W is a subspace of ℝ⁴ by Theorem 2, and is a vector space.
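Exercise 9's move, turning a homogeneous constraint system into W = Nul A, mirrors directly in code: each constraint becomes one row of the coefficient matrix, and a basis for W is its nullspace. The entries follow the reconstruction given here and should be treated as illustrative:

```python
from sympy import Matrix

# Coefficient matrix for p - 3q - 4s = 0 and 2p - 5r - s = 0,
# variable order (p, q, r, s); entries as reconstructed here.
A = Matrix([[1, -3, 0, -4],
            [2, 0, -5, -1]])

W_basis = A.nullspace()            # a basis for W = Nul A
assert len(W_basis) == 2           # 4 variables, 2 pivot rows -> 2 free variables
for v in W_basis:
    assert A * v == Matrix([0, 0])
```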
10. The set W is the set of all solutions to the homogeneous system of equations 3a + b – c = 0,
a + b + 2c – 2d = 0. Thus W = Nul A, where
A = [ 3 1 -1  0 ]
    [ 1 1  2 -2 ]
Thus W is a subspace of ℝ⁴ by Theorem 2, and is a vector space.
11. The set W is a subset of ℝ⁴. If W were a vector space (under the standard operations in ℝ⁴), then it
would be a subspace of ℝ⁴. But W is not a subspace of ℝ⁴ since the zero vector is not in W. Thus W is
not a vector space.
12. The set W is a subset of ℝ⁴. If W were a vector space (under the standard operations in ℝ⁴), then it
would be a subspace of ℝ⁴. But W is not a subspace of ℝ⁴ since the zero vector is not in W. Thus W is
not a vector space.
13. An element w of W may be written as w = c·a1 + d·a2 = A[c; d], where c and d are any real
numbers and a1, a2 are the two given vectors in ℝ³. So W = Col A, where A = [a1 a2]. Thus W is
a subspace of ℝ³ by Theorem 3, and is a vector space.
14. An element w of W may be written as w = s·a1 + t·a2 = A[s; t], where s and t are any real
numbers and a1, a2 are the two given vectors in ℝ³. So W = Col A, where A = [a1 a2]. Thus W is
a subspace of ℝ³ by Theorem 3, and is a vector space.
15. An element in this set may be written as r·a1 + s·a2 + t·a3 = A[r; s; t], where r, s, and t are any
real numbers and A = [a1 a2 a3] is the 4 × 3 matrix whose columns are the three given vectors. So
the set is Col A.
16. An element in this set may be written as b·a1 + c·a2 + d·a3 = A[b; c; d], where b, c, and d are any
real numbers and A = [a1 a2 a3] is the 4 × 3 matrix whose columns are the three given vectors. So
the set is Col A.
17. The matrix A is a 4 × 2 matrix. Thus
(a) Nul A is a subspace of ℝ², and
(b) Col A is a subspace of ℝ⁴.
18. The matrix A is a 4 × 3 matrix. Thus
(a) Nul A is a subspace of ℝ³, and
(b) Col A is a subspace of ℝ⁴.
19. The matrix A is a 2 × 5 matrix. Thus
(a) Nul A is a subspace of ℝ⁵, and
(b) Col A is a subspace of ℝ².
20. The matrix A is a 1 × 5 matrix. Thus
(a) Nul A is a subspace of ℝ⁵, and
(b) Col A is a subspace of ℝ¹ = ℝ.
21. Either column of A is a nonzero vector in Col A. To find a nonzero vector in Nul A, find the general
solution of Ax = 0 in terms of the free variables. Since
[A 0] ~ [ 1 -2/3 0 ]
        [ 0   0  0 ]
        [ 0   0  0 ]
        [ 0   0  0 ]
the general solution is x1 = (2/3)x2, with x2 free. Letting x2 be a nonzero value (say x2 = 3) gives the
nonzero vector x = (x1, x2) = (2, 3), which is in Nul A.
22. Any column of A is a nonzero vector in Col A. To find a nonzero vector in Nul A, find the general
solution of Ax = 0 in terms of the free variables. Since
[A 0] ~ [ 1 0 -1 0 ]
        [ 0 1 -1 0 ]
        [ 0 0  0 0 ]
        [ 0 0  0 0 ]
the general solution is x1 = x3, x2 = x3, with x3 free. Letting x3 be a nonzero value (say x3 = 1) gives
the nonzero vector x = (x1, x2, x3) = (1, 1, 1), which is in Nul A.
23. Consider the system with augmented matrix [A w]. Since
[A w] ~ [ 1 -2 1 ]
        [ 0  0 0 ]
the system is consistent and w is in Col A. Also, since
Aw = [ 2 -4 ] [ 2 ]   [ 0 ]
     [ 1 -2 ] [ 1 ] = [ 0 ]
w is in Nul A.
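Exercise 23 makes two separate membership checks: a consistency test for Col A and a product test for Nul A. Both are one-liners numerically; the data below follow the reconstruction used in this solution, so treat the exact entries as illustrative:

```python
import numpy as np

# Entries as reconstructed for Exercise 23 (treat as illustrative).
A = np.array([[2, -4],
              [1, -2]])
w = np.array([2, 1])

# w is in Col A iff [A | w] has the same rank as A (Ax = w is consistent).
in_col = np.linalg.matrix_rank(np.column_stack([A, w])) == np.linalg.matrix_rank(A)
# w is in Nul A iff Aw = 0.
in_nul = not np.any(A @ w)

print(in_col, in_nul)   # True True
```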
24. Consider the system with augmented matrix [A w]. Row reduction shows that the system is
consistent, so w is in Col A. Also, one computes that Aw = 0, so w is in Nul A.
25. a. True. See the definition before Example 1.
b. False. See Theorem 2.
c. True. See the remark just before Example 4.
d. False. The equation Ax = b must be consistent for every b. See #7 in the table on page 204.
e. True. See Figure 2.
f. True. See the remark after Theorem 3.
26. a. True. See Theorem 2.
b. True. See Theorem 3.
c. False. See the box after Theorem 3.
d. True. See the paragraph after the definition of a linear transformation.
e. True. See Figure 2.
f. True. See the paragraph before Example 8.
27. Let A be the coefficient matrix of the given homogeneous system of equations. Since Ax = 0 for
x = (3, 2, 1), x is in Nul A. Since Nul A is a subspace of ℝ³, it is closed under scalar multiplication.
Thus 10x = (30, 20, 10) is also in Nul A, and x1 = 30, x2 = 20, x3 = 10 is also a solution to the
system of equations.
28. Let A be the coefficient matrix of the given systems of equations. Since the first system has a
solution, the constant vector b = (0, 1, 9) is in Col A. Since Col A is a subspace of ℝ³, it is closed
under scalar multiplication. Thus 5b = (0, 5, 45) is also in Col A, and the second system of equations
must thus have a solution.
29. a. Since A0 = 0, the zero vector is in Col A.
b. Since Ax + Aw = A(x + w), the sum Ax + Aw is in Col A.
c. Since c(Ax) = A(cx), the vector c(Ax) is in Col A.
30. Since T(0_V) = 0_W, the zero vector 0_W of W is in the range of T. Let T(x) and T(w) be typical
elements in the range of T. Then since T(x) + T(w) = T(x + w), the sum T(x) + T(w) is in the range
of T, and the range of T is closed under vector addition. Let c be any scalar. Then since
cT(x) = T(cx), the vector cT(x) is in the range of T, and the range of T is closed under scalar
multiplication. Hence the range of T is a subspace of W.
31. a. Let p and q be arbitrary polynomials in ℙ₂, and let c be any scalar. Then
T(p + q) = ( (p + q)(0), (p + q)(1) ) = ( p(0) + q(0), p(1) + q(1) ) = ( p(0), p(1) ) + ( q(0), q(1) ) = T(p) + T(q)
and
T(cp) = ( (cp)(0), (cp)(1) ) = c( p(0), p(1) ) = cT(p),
so T is a linear transformation.
b. Any quadratic polynomial q for which q(0) = 0 and q(1) = 0 will be in the kernel of T. The
polynomial q must then be a multiple of p(t) = t(t – 1). Given any vector (x1, x2) in ℝ², the
polynomial p(t) = x1 + (x2 – x1)t has p(0) = x1 and p(1) = x2. Thus the range of T is all of ℝ².
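The transformation in Exercise 31 sends a quadratic p to the vector (p(0), p(1)). A quick numerical spot-check of the kernel and range claims (a sketch, not part of the book's solution):

```python
import numpy as np

def T(coeffs):
    """T(p) = (p(0), p(1)) for p(t) = a0 + a1*t + a2*t**2, coeffs = (a0, a1, a2)."""
    p = np.polynomial.Polynomial(coeffs)
    return np.array([p(0.0), p(1.0)])

# Kernel: p(t) = t(t - 1) = -t + t**2 vanishes at both 0 and 1.
assert np.allclose(T([0.0, -1.0, 1.0]), [0.0, 0.0])

# Onto: for any target (x1, x2), p(t) = x1 + (x2 - x1)t maps onto it.
x1, x2 = 3.0, -7.0
assert np.allclose(T([x1, x2 - x1, 0.0]), [x1, x2])
```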
32. Any quadratic polynomial q for which q(0) = 0 will be in the kernel of T. The polynomial q must
then be q = at + bt². Thus the polynomials p1(t) = t and p2(t) = t² span the kernel of T. If a vector
is in the range of T, it must be of the form (a, a). If a vector is of this form, it is the image of the
polynomial p(t) = a in ℙ₂. Thus the range of T is { (a, a) : a real }.
33. a. For any A and B in M₂ₓ₂ and for any scalar c,
T(A + B) = (A + B) + (A + B)ᵀ = A + B + Aᵀ + Bᵀ = (A + Aᵀ) + (B + Bᵀ) = T(A) + T(B)
and
T(cA) = cA + (cA)ᵀ = c(A + Aᵀ) = cT(A),
so T is a linear transformation.
b. Let B be an element of M₂ₓ₂ with Bᵀ = B, and let A = (1/2)B. Then
T(A) = A + Aᵀ = (1/2)B + ((1/2)B)ᵀ = (1/2)B + (1/2)Bᵀ = (1/2)B + (1/2)B = B.
c. Part b. showed that the range of T contains the set of all B in M₂ₓ₂ with Bᵀ = B. It must also be
shown that any B in the range of T has this property. Let B be in the range of T. Then B = T(A) for
some A in M₂ₓ₂. Then B = A + Aᵀ, and
Bᵀ = (A + Aᵀ)ᵀ = Aᵀ + (Aᵀ)ᵀ = Aᵀ + A = B,
so B has the property that Bᵀ = B.
d. Let A = [ a b ; c d ] be in the kernel of T. Then T(A) = A + Aᵀ = 0, so
A + Aᵀ = [ 2a   b+c ]   [ 0 0 ]
         [ c+b  2d  ] = [ 0 0 ]
Solving, it is found that a = d = 0 and c = -b. Thus the kernel of T is { [ 0 b ; -b 0 ] : b real }.
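The claims of Exercise 33 about T(A) = A + Aᵀ are all easy to spot-check numerically: every value of T is symmetric, every symmetric B equals T(B/2), and skew-symmetric matrices are sent to zero. A short sketch:

```python
import numpy as np

def T(A):
    """The transformation of Exercise 33: T(A) = A + A^T."""
    return A + A.T

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))

# (a)/(c): every value of T is symmetric.
assert np.allclose(T(A), T(A).T)

# (b): every symmetric B is hit, since B = T(B / 2).
B = np.array([[1.0, 5.0], [5.0, -2.0]])       # B^T = B
assert np.allclose(T(B / 2), B)

# (d): the kernel consists of the skew-symmetric matrices [[0, b], [-b, 0]].
K = np.array([[0.0, 3.0], [-3.0, 0.0]])
assert np.allclose(T(K), np.zeros((2, 2)))
```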
34. Let f and g be any elements in C[0, 1] and let c be any scalar. Then T(f) is the antiderivative F of f
with F(0) = 0 and T(g) is the antiderivative G of g with G(0) = 0. By the rules for antidifferentiation,
F + G will be an antiderivative of f + g, and (F + G)(0) = F(0) + G(0) = 0 + 0 = 0. Thus
T(f + g) = T(f) + T(g). Likewise cF will be an antiderivative of cf, and (cF)(0) = cF(0) = c·0 = 0.
Thus T(cf) = cT(f), and T is a linear transformation. To find the kernel of T, we must find all
functions f in C[0,1] with antiderivative equal to the zero function. The only function with this
property is the zero function 0, so the kernel of T is {0}.
35. Since U is a subspace of V, 0_V is in U. Since T is linear, T(0_V) = 0_W. So 0_W is in T(U). Let T(x)
and T(y) be typical elements in T(U). Then x and y are in U, and since U is a subspace of V, x + y is
also in U. Since T is linear, T(x) + T(y) = T(x + y). So T(x) + T(y) is in T(U), and T(U) is closed
under vector addition. Let c be any scalar. Then since x is in U and U is a subspace of V, cx is in U.
Since T is linear, T(cx) = cT(x) and cT(x) is in T(U). Thus T(U) is closed under scalar
multiplication, and T(U) is a subspace of W.
36. Since Z is a subspace of W, 0_W is in Z. Since T is linear, T(0_V) = 0_W. So 0_V is in U. Let x and y
be typical elements in U. Then T(x) and T(y) are in Z, and since Z is a subspace of W,
T(x) + T(y) is also in Z. Since T is linear, T(x) + T(y) = T(x + y). So T(x + y) is in Z, and x + y is
in U. Thus U is closed under vector addition. Let c be any scalar. Then since x is in U, T(x) is in Z.
Since Z is a subspace of W, cT(x) is also in Z. Since T is linear, cT(x) = T(cx), so T(cx) is in Z.
Thus cx is in U and U is closed under scalar multiplication. Hence U is a subspace of V.
37. [M] Consider the system with augmented matrix [A w]. Row reduction shows that the system is
consistent, so w is in Col A. Also, one computes that Aw ≠ 0, so w is not in Nul A.
38. [M] Consider the system with augmented matrix [A w]. Row reduction shows that the system is
consistent, so w is in Col A. Also, one computes that Aw = 0, so w is in Nul A.
39. [M]
a. To show that a3 and a5 are in the column space of B, we can row reduce the matrices [B a3]
and [B a5]. Both of the resulting systems are consistent, so a3 and a5 are in the column space
of B. The same conclusions can be drawn by observing the reduced row echelon form of A:
A ~ [ 1 0  1/3 0  10/3 ]
    [ 0 1 -1/3 0 -26/3 ]
    [ 0 0   0  1   -4  ]
    [ 0 0   0  0    0  ]
b. We find the general solution of Ax = 0 in terms of the free variables by using the reduced row
echelon form of A given above: x1 = -(1/3)x3 - (10/3)x5, x2 = (1/3)x3 + (26/3)x5, x4 = 4x5,
with x3 and x5 free. So
x = x3(-1/3, 1/3, 1, 0, 0) + x5(-10/3, 26/3, 0, 4, 1),
and a spanning set for Nul A is {(-1/3, 1/3, 1, 0, 0), (-10/3, 26/3, 0, 4, 1)}.
c. The reduced row echelon form of A shows that the columns of A are linearly dependent and do
not span ℝ⁴. Thus by Theorem 12 in Section 1.9, T is neither one-to-one nor onto.
40. [M] Since the line lies both in H = Span{v1, v2} and in K = Span{v3, v4}, w can be written both as
c1v1 + c2v2 and as c3v3 + c4v4. To find w we must find the cj's which solve
c1v1 + c2v2 - c3v3 - c4v4 = 0. Row reduction of [v1 v2 -v3 -v4 0] shows that the vector of cj's
must be a multiple of (10/3, -26/3, 4, 1). One simple choice is (10, -26, 12, 3), which gives
w = 10v1 - 26v2 = 12v3 + 3v4 = (24, -48, -24). Another choice for w is (1, -2, -1).
4.3 SOLUTIONS
Notes:
The definition for basis is given initially for subspaces because this emphasizes that the basis
elements must be in the subspace. Students often overlook this point when the definition is given for a
vector space (see Exercise 25). The subsection on bases for Nul A and Col A is essential for Sections 4.5
and 4.6. The subsection on “Two Views of a Basis” is also fundamental to understanding the interplay
between linearly independent sets, spanning sets, and bases. Key exercises in this section are Exercises
21–25, which help to deepen students’ understanding of these different subsets of a vector space.
1. Consider the matrix whose columns are the given set of vectors. This 3 × 3 matrix is in echelon form,
and has 3 pivot positions. Thus by the Invertible Matrix Theorem, its columns are linearly
independent and span ℝ³. So the given set of vectors is a basis for ℝ³.
2. Since the zero vector is a member of the given set of vectors, the set cannot be linearly independent
and thus cannot be a basis for ℝ³. Now consider the matrix whose columns are the given set of
vectors. This 3 × 3 matrix has only 2 pivot positions. Thus by the Invertible Matrix Theorem, its
columns do not span ℝ³.
3. Consider the matrix whose columns are the given set of vectors. The reduced echelon form of this
matrix has only two pivot positions. Thus its columns do not form a basis for ℝ³; the set of vectors is
linearly dependent and does not span ℝ³.
4. Consider the matrix whose columns are the given set of vectors. The reduced echelon form of this
matrix is the 3 × 3 identity matrix, so the matrix has three pivot positions. Thus its columns form a
basis for ℝ³.
5. Since the zero vector is a member of the given set of vectors, the set cannot be linearly independent
and thus cannot be a basis for ℝ³. Now consider the matrix whose columns are the given set of
vectors. The reduced echelon form of this matrix has a pivot in each row. Thus the given set of
vectors spans ℝ³.
6. Consider the matrix whose columns are the given set of vectors. Since the matrix cannot have a pivot
in each row, its columns cannot span ℝ³; thus the given set of vectors is not a basis for ℝ³. The
reduced echelon form of the matrix has a pivot in each column, so the given set of vectors is linearly
independent.
7. Consider the matrix whose columns are the given set of vectors. Since the matrix cannot have a pivot
in each row, its columns cannot span ℝ³; thus the given set of vectors is not a basis for ℝ³. The
reduced echelon form of the matrix has a pivot in each column, so the given set of vectors is linearly
independent.
8. Consider the matrix whose columns are the given set of vectors. Since the matrix cannot have a pivot
in each column, the set cannot be linearly independent and thus cannot be a basis for ℝ³. The reduced
echelon form of this matrix has a pivot in each row, so the given set of vectors spans ℝ³.
9. We find the general solution of Ax = 0 in terms of the free variables by using the reduced echelon
form of A:
A ~ [ 1 0 -2 0 ]
    [ 0 1 -1 0 ]
    [ 0 0  0 1 ]
So x1 = 2x3, x2 = x3, x4 = 0, with x3 free. So
x = x3(2, 1, 1, 0),
and a basis for Nul A is { (2, 1, 1, 0) }.
10. We find the general solution of Ax = 0 in terms of the free variables by using the reduced echelon
form of A:
A ~ [ 1 0 0 -2  -9 ]
    [ 0 1 0  1 -10 ]
    [ 0 0 1  0  -2 ]
So x1 = 2x4 + 9x5, x2 = -x4 + 10x5, x3 = 2x5, with x4 and x5 free. So
x = x4(2, -1, 0, 1, 0) + x5(9, 10, 2, 0, 1),
and a basis for Nul A is {(2, -1, 0, 1, 0), (9, 10, 2, 0, 1)}.
11. Let A = [ 1 -3 2 ]. Then we wish to find a basis for Nul A. We find the general solution of Ax = 0
in terms of the free variables: x = 3y - 2z with y and z free. So
(x, y, z) = y(3, 1, 0) + z(-2, 0, 1),
and a basis for Nul A is {(3, 1, 0), (-2, 0, 1)}.
12. We want to find a basis for the set of vectors in ℝ² on the line 3x + y = 0. Let A = [ 3 1 ]. Then we
wish to find a basis for Nul A. We find the general solution of Ax = 0 in terms of the free variables:
y = -3x with x free. So
(x, y) = x(1, -3),
and a basis for Nul A is {(1, -3)}.
13. Since B is a row echelon form of A, we see that the first and second columns of A are its pivot
columns, and these two columns of A form a basis for Col A. To find a basis for Nul A, we find the
general solution of Ax = 0 in terms of the free variables: x1 = -6x3 - 5x4,
x2 = -(5/2)x3 - (3/2)x4, with x3 and x4 free. So
x = x3(-6, -5/2, 1, 0) + x4(-5, -3/2, 0, 1),
and a basis for Nul A is {(-6, -5/2, 1, 0), (-5, -3/2, 0, 1)}.
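The pivot-column procedure of Exercises 13 and 14 is exactly what SymPy's rref() reports: it returns the reduced form together with the pivot column indices, and nullspace() supplies the basis for Nul A. The matrix here is illustrative, not the book's:

```python
from sympy import Matrix

# Illustrative matrix: its second column is twice its first.
A = Matrix([[1, 2, 0, 3],
            [2, 4, 1, 7],
            [-1, -2, 1, 0]])

R, pivots = A.rref()                       # pivots is a tuple of column indices
col_basis = [A.col(j) for j in pivots]     # pivot columns of A itself, not of R
nul_basis = A.nullspace()

# Rank + nullity = number of columns (the Rank Theorem).
assert len(col_basis) + len(nul_basis) == A.cols
```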
14. Since B is a row echelon form of A, we see that the first, third, and fifth columns of A are its pivot
columns, and these three columns of A form a basis for Col A. To find a basis for Nul A, we find the
general solution of Ax = 0 in terms of the free variables, mentally completing the row reduction of
B: x1 = -2x2 - 2x4, x3 = 2x4, x5 = 0, with x2 and x4 free. So
x = x2(-2, 1, 0, 0, 0) + x4(-2, 0, 2, 1, 0),
and a basis for Nul A is {(-2, 1, 0, 0, 0), (-2, 0, 2, 1, 0)}.
15. This problem is equivalent to finding a basis for Col A, where A = [v1 v2 v3 v4 v5]. Row
reduction of A shows that its first, second, fourth, and fifth columns are its pivot columns. Thus
{v1, v2, v4, v5} is a basis for the space spanned by the given vectors.
16. This problem is equivalent to finding a basis for Col A, where A = [v1 v2 v3 v4 v5]. Row
reduction of A shows that its first, second, and third columns are its pivot columns. Thus
{v1, v2, v3} is a basis for the space spanned by the given vectors.
17. [M] This problem is equivalent to finding a basis for Col A, where A = [v1 v2 v3 v4 v5]. Row
reduction of A shows that its first, second, third, and fifth columns are its pivot columns. Thus
{v1, v2, v3, v5} is a basis for the space spanned by the given vectors.
18. [M] This problem is equivalent to finding a basis for Col A, where A = [v1 v2 v3 v4 v5]. Row
reduction of A shows that its first, second, third, and fifth columns are its pivot columns. Thus
{v1, v2, v3, v5} is a basis for the space spanned by the given vectors.
19. The given dependence relation expresses 0 as a linear combination of v1, v2, and v3 with all three
weights nonzero, so each of the vectors is a linear combination of the others. Thus the sets {v1, v2},
{v1, v3}, and {v2, v3} all span H. Since we may confirm that none of the three vectors is a multiple
of any of the others, the sets {v1, v2}, {v1, v3}, and {v2, v3} are linearly independent and thus each
forms a basis for H.
20. The given dependence relation expresses 0 as a linear combination of v1, v2, and v3 with all three
weights nonzero, so each of the vectors is a linear combination of the others. Thus the sets {v1, v2},
{v1, v3}, and {v2, v3} all span H. Since we may confirm that none of the three vectors is a multiple
of any of the others, the sets {v1, v2}, {v1, v3}, and {v2, v3} are linearly independent and thus each
forms a basis for H.
21. a. False. The zero vector by itself is linearly dependent. See the paragraph preceding Theorem 4.
b. False. The set {b1, …, bp} must also be linearly independent. See the definition of a basis.
c. True. See Example 3.
d. False. See the subsection “Two Views of a Basis.”
e. False. See the box before Example 9.
22. a. False. The subspace spanned by the set must also coincide with H. See the definition of a basis.
b. True. Apply the Spanning Set Theorem to V instead of H. The space V is nonzero because the
spanning set uses nonzero vectors.
c. True. See the subsection “Two Views of a Basis.”
d. False. See the two paragraphs before Example 8.
e. False. See the warning after Theorem 6.
23. Let A = [v1 v2 v3 v4]. Then A is square and its columns span ℝ⁴ since
ℝ⁴ = Span{v1, v2, v3, v4}. So its columns are linearly independent by the Invertible Matrix
Theorem, and {v1, v2, v3, v4} is a basis for ℝ⁴.
24. Let A = [v1 … vn]. Then A is square and its columns are linearly independent, so its columns
span ℝⁿ by the Invertible Matrix Theorem. Thus {v1, …, vn} is a basis for ℝⁿ.
25. In order for the set to be a basis for H, {v1, v2, v3} must be a spanning set for H; that is,
H = Span{v1, v2, v3}. The exercise shows that H is a subset of Span{v1, v2, v3}, but there are
vectors in Span{v1, v2, v3} which are not in H (v1 and v3, for example). So
H ≠ Span{v1, v2, v3}, and {v1, v2, v3} is not a basis for H.
26. Since sin t cos t = (1/2) sin 2t, the set {sin t, sin 2t} spans the subspace. By inspection we note that
this set is linearly independent, so {sin t, sin 2t} is a basis for the subspace.
27. The set {cos ωt, sin ωt} spans the subspace. By inspection we note that this set is linearly
independent, so {cos ωt, sin ωt} is a basis for the subspace.
28. The set {e^(-bt), te^(-bt)} spans the subspace. By inspection we note that this set is linearly
independent, so {e^(-bt), te^(-bt)} is a basis for the subspace.
29. Let A be the n × k matrix [v1 … vk]. Since A has fewer columns than rows, there cannot be a
pivot position in each row of A. By Theorem 4 in Section 1.4, the columns of A do not span ℝⁿ and
thus are not a basis for ℝⁿ.
30. Let A be the n × k matrix [v1 … vk]. Since A has fewer rows than columns, there cannot be a
pivot position in each column of A. By Theorem 8 in Section 1.7, the columns of A are not linearly
independent and thus are not a basis for ℝⁿ.
31. Suppose that {v1, …, vp} is linearly dependent. Then there exist scalars c1, …, cp, not all zero, with
c1v1 + … + cpvp = 0. Since T is linear,
T(c1v1 + … + cpvp) = c1T(v1) + … + cpT(vp)
and
T(c1v1 + … + cpvp) = T(0) = 0.
Thus c1T(v1) + … + cpT(vp) = 0, and since not all of the ci are zero, {T(v1), …, T(vp)} is linearly
dependent.
32. Suppose that {T(v1), …, T(vp)} is linearly dependent. Then there exist scalars c1, …, cp, not all
zero, with c1T(v1) + … + cpT(vp) = 0. Since T is linear,
T(c1v1 + … + cpvp) = c1T(v1) + … + cpT(vp) = 0 = T(0).
Since T is one-to-one, T(c1v1 + … + cpvp) = T(0) implies that c1v1 + … + cpvp = 0. Since not all of
the ci are zero, {v1, …, vp} is linearly dependent.
33. Neither polynomial is a multiple of the other polynomial. So {p1, p2} is a linearly independent set in
ℙ₃. Note: {p1, p2} is also a linearly independent set in ℙ₂, since p1 and p2 both happen to be in ℙ₂.
34. By inspection, p3 = p1 + p2, or p1 + p2 - p3 = 0. By the Spanning Set Theorem,
Span{p1, p2, p3} = Span{p1, p2}. Since neither p1 nor p2 is a multiple of the other, they are
linearly independent and hence {p1, p2} is a basis for Span{p1, p2, p3}.
35. Let {v1, v3} be any linearly independent set in a vector space V, and let v2 and v4 each be linear
combinations of v1 and v3. For instance, let v2 = 5v1 and v4 = v1 + v3. Then {v1, v3} is a basis
for Span{v1, v2, v3, v4}.
36. [M] Row reduce the matrix [u1 u2 u3] to identify its pivot columns: there are pivots in the first
two columns only, so {u1, u2} is a basis for H. Row reduction of [v1 v2 v3] shows a pivot in
every column, so {v1, v2, v3} is a basis for K. Finally, row reduction of [u1 u2 u3 v1 v2 v3]
shows that its pivot columns are the first, second, fifth, and sixth columns, so {u1, u2, v2, v3} is a
basis for H + K.
37. [M] For example, writing
c1·t + c2·sin t + c3·cos 2t + c4·sin t cos t = 0
with t = 0, .1, .2, .3 gives the following coefficient matrix A for the homogeneous system Ac = 0 (to
four decimal places):
A = [ 0   0      1      0     ]
    [ .1  .0998  .9801  .0993 ]
    [ .2  .1987  .9211  .1947 ]
    [ .3  .2955  .8253  .2823 ]
(the columns are the values of t, sin t, cos 2t, and sin t cos t at the four sample points). This matrix is
invertible, so the system Ac = 0 has only the trivial solution and {t, sin t, cos 2t, sin t cos t} is a
linearly independent set of functions.
38. [M] For example, writing
c1·1 + c2 cos t + c3 cos^2 t + c4 cos^3 t + c5 cos^4 t + c6 cos^5 t + c7 cos^6 t = 0
with t = 0, .1, .2, .3, .4, .5, .6 gives the following coefficient matrix A for the homogeneous system Ac = 0 (to four decimal places):
A = [1 cos 0 cos^2 0 ... cos^6 0; 1 cos .1 cos^2 .1 ... cos^6 .1; ... ; 1 cos .6 cos^2 .6 ... cos^6 .6]
= [1 1 1 1 1 1 1;
1 .9950 .9900 .9851 .9802 .9753 .9704;
1 .9801 .9605 .9414 .9226 .9042 .8862;
1 .9553 .9127 .8719 .8330 .7958 .7602;
1 .9211 .8484 .7814 .7197 .6629 .6106;
1 .8776 .7702 .6759 .5931 .5205 .4568;
1 .8253 .6812 .5622 .4640 .3830 .3161]
This matrix is invertible, so the system Ac = 0 has only the trivial solution and {1, cos t, cos^2 t, cos^3 t, cos^4 t, cos^5 t, cos^6 t} is a linearly independent set of functions.
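A float64 rank test is not trustworthy here: this is a Vandermonde matrix in cos t with closely spaced nodes, and its determinant is on the order of 10^-26, below what double precision can resolve against roundoff. A high-precision sketch with mpmath confirms the determinant is nonzero:

```python
from mpmath import mp, matrix, mpf

mp.dps = 50  # 50 significant digits
# A[i, j] = cos(t_i)^j for t_i = 0, .1, ..., .6 -- the matrix of the solution above.
A = matrix(7, 7)
for i in range(7):
    for j in range(7):
        A[i, j] = mp.cos(mpf(i) / 10) ** j

d = mp.det(A)
print(d != 0)  # True: Ac = 0 has only the trivial solution
```

The determinant is tiny but exactly computable at this precision, which is why "invertible" is a legitimate conclusion even though the matrix is badly conditioned.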
4.4 SOLUTIONS
Notes:
Section 4.7 depends heavily on this section, as does Section 5.4. It is possible to cover the ℝ^n parts of the two later sections, however, if the first half of Section 4.4 (and perhaps Example 7) is covered. The linearity of the coordinate mapping is used in Section 5.4 to find the matrix of a transformation relative to two bases. The change-of-coordinates matrix appears in Section 5.4, Theorem 8 and Exercise 27. The concept of an isomorphism is needed in the proof of Theorem 17 in Section 4.8. Exercise 25 is used in Section 4.7 to show that the change-of-coordinates matrix is invertible.
1. We calculate that x = 5[3; -5] + 3[-4; 6] = [3; -7].
2. We calculate that x = (-2)[3; 2] + 5[-4; 1] = [-26; 1].
3. We calculate that x = 1·[1; 2; -3] + 0·[5; 0; 2] + (-2)·[4; 3; 0] = [-7; -4; -3].
4. We calculate that x = (-3)·[-2; -2; 0] + 2·[3; 0; 2] + (-1)·[4; 1; 3] = [8; 5; 1].
5. The matrix [b1 b2 x] row reduces to [1 0 2; 0 1 1], so [x]_B = [2; 1].
6. The matrix [b1 b2 x] row reduces to [1 0 3; 0 1 2], so [x]_B = [3; 2].
7. The matrix [b1 b2 b3 x] row reduces to [1 0 0 1; 0 1 0 1; 0 0 1 3], so [x]_B = [1; 1; 3].
8. The matrix [b1 b2 b3 x] row reduces to [1 0 0 1; 0 1 0 1; 0 0 1 1], so [x]_B = [1; 1; 1].
9. The change-of-coordinates matrix from B to the standard basis in ℝ^2 is
P_B = [b1 b2] = [1 2; -3 -5].
10. The change-of-coordinates matrix from B to the standard basis in ℝ^3 is
P_B = [b1 b2 b3] = [3 2 1; 0 2 2; 6 4 3].
11. Since P_B^-1 converts x into its B-coordinate vector, we find that
[x]_B = P_B^-1 x = [1 3; 2 5]^-1 [-2; -5] = [-5 3; 2 -1][-2; -5] = [-5; 1].
12. Since P_B^-1 converts x into its B-coordinate vector, we find that
[x]_B = P_B^-1 x = [1 2; 1 1]^-1 [-2; 3] = [-1 2; 1 -1][-2; 3] = [8; -5].
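Numerically, one would solve P_B[x]_B = x rather than form P_B^-1 explicitly; a minimal sketch with the data of Exercise 11:

```python
import numpy as np

P_B = np.array([[1.0, 3.0],
                [2.0, 5.0]])   # columns are b1 and b2
x = np.array([-2.0, -5.0])

# Solving P_B [x]_B = x is the numerically preferred form of [x]_B = P_B^{-1} x.
x_B = np.linalg.solve(P_B, x)
print(x_B)  # [-5.  1.]
```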
13. We must find c1, c2, and c3 such that
c1(1 + t^2) + c2(t + t^2) + c3(1 + 2t + t^2) = p(t) = 1 + 4t + 7t^2.
Equating the coefficients of the two polynomials produces the system of equations
c1 + c3 = 1
c2 + 2c3 = 4
c1 + c2 + c3 = 7
We row reduce the augmented matrix for the system of equations to find
[1 0 1 1; 0 1 2 4; 1 1 1 7] ~ [1 0 0 2; 0 1 0 6; 0 0 1 -1], so [p]_B = [2; 6; -1].
One may also solve this problem using the coordinate vectors of the given polynomials relative to the standard basis {1, t, t^2}; the same system of linear equations results.
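The coefficient-matching in Exercises 13 and 14 can be automated; a sympy sketch for Exercise 13:

```python
import sympy as sp

t, c1, c2, c3 = sp.symbols('t c1 c2 c3')
basis = [1 + t**2, t + t**2, 1 + 2*t + t**2]
p = 1 + 4*t + 7*t**2

# Equate coefficients of 1, t, t^2 on both sides of c1 b1 + c2 b2 + c3 b3 = p.
diff = sp.expand(c1*basis[0] + c2*basis[1] + c3*basis[2] - p)
eqs = [diff.coeff(t, k) for k in range(3)]
sol = sp.solve(eqs, [c1, c2, c3])
print(sol)  # {c1: 2, c2: 6, c3: -1}
```

The three coefficient equations produced here are exactly the linear system written out above.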
14. We must find c1, c2, and c3 such that
c1(1 - t^2) + c2(t - t^2) + c3(1 - t + t^2) = p(t) = 2 + 3t - 6t^2.
Equating the coefficients of the two polynomials produces the system of equations
c1 + c3 = 2
c2 - c3 = 3
-c1 - c2 + c3 = -6
We row reduce the augmented matrix for the system of equations to find
[1 0 1 2; 0 1 -1 3; -1 -1 1 -6] ~ [1 0 0 3; 0 1 0 2; 0 0 1 -1], so [p]_B = [3; 2; -1].
One may also solve this problem using the coordinate vectors of the given polynomials relative to the standard basis {1, t, t^2}; the same system of linear equations results.
15. a. True. See the definition of the B-coordinate vector.
b. False. See Equation (4).
c. False. ℙ_3 is isomorphic to ℝ^4. See Example 5.
16. a. True. See Example 2.
b. False. By definition, the coordinate mapping goes in the opposite direction.
c. True. If the plane passes through the origin, as in Example 7, the plane is isomorphic to ℝ^2.
17. We must solve the vector equation x1[1; -3] + x2[2; -8] + x3[-3; 7] = [1; 1]. We row reduce the augmented matrix for the system of equations to find
[1 2 -3 1; -3 -8 7 1] ~ [1 0 -5 5; 0 1 1 -2].
Thus we can let x1 = 5 + 5x3 and x2 = -2 - x3, where x3 can be any real number. Letting x3 = 0 and x3 = 1 produces two different ways to express [1; 1] as a linear combination of the other vectors: 5v1 - 2v2 and 10v1 - 3v2 + v3. There are infinitely many correct answers to this problem.
18. For each k, b_k = 0·b1 + ... + 1·b_k + ... + 0·b_n, so [b_k]_B = (0, ..., 1, ..., 0) = e_k.
19. The set S spans V because every x in V has a representation as a (unique) linear combination of elements in S. To show linear independence, suppose that S = {v1, ..., vn} and that c1v1 + ... + cnvn = 0 for some scalars c1, ..., cn. The case when c1 = ... = cn = 0 is one possibility. By hypothesis, this is the unique (and thus the only) possible representation of the zero vector as a linear combination of the elements in S. So S is linearly independent and is thus a basis for V.
20. For w in V there exist scalars k1, k2, k3, and k4 such that
w = k1v1 + k2v2 + k3v3 + k4v4  (1)
because {v1, v2, v3, v4} spans V. Because the set is linearly dependent, there exist scalars c1, c2, c3, and c4, not all zero, such that
0 = c1v1 + c2v2 + c3v3 + c4v4  (2)
Adding (1) and (2) gives
w = w + 0 = (k1 + c1)v1 + (k2 + c2)v2 + (k3 + c3)v3 + (k4 + c4)v4  (3)
At least one of the weights in (3) differs from the corresponding weight in (1) because at least one of the c_i is nonzero. So w is expressed in more than one way as a linear combination of v1, v2, v3, and v4.
21. The matrix of the transformation will be P_B^-1 = [1 2; 4 9]^-1 = [9 -2; -4 1].
22. The matrix of the transformation will be P_B^-1 = [b1 ... bn]^-1.
23. Suppose that
[u]_B = [w]_B = [c1; ...; cn].
By definition of coordinate vectors, u = w = c1b1 + ... + cnbn. Since u and w were arbitrary elements of V, the coordinate mapping is one-to-one.
24. Given y = (y1, ..., yn) in ℝ^n, let u = y1b1 + ... + ynbn. Then, by definition, [u]_B = y. Since y was arbitrary, the coordinate mapping is onto ℝ^n.
25. Since the coordinate mapping is one-to-one, the following equations have the same solutions c1, ..., cp:
c1u1 + ... + cpup = 0  (the zero vector in V)  (4)
[c1u1 + ... + cpup]_B = [0]_B  (the zero vector in ℝ^n)  (5)
Since the coordinate mapping is linear, (5) is equivalent to
c1[u1]_B + ... + cp[up]_B = [0; ...; 0]  (6)
Thus (4) has only the trivial solution if and only if (6) has only the trivial solution. It follows that {u1, ..., up} is linearly independent if and only if {[u1]_B, ..., [up]_B} is linearly independent. This result also follows directly from Exercises 31 and 32 in Section 4.3.
26. By definition, w is a linear combination of u1, ..., up if and only if there exist scalars c1, ..., cp such that
w = c1u1 + ... + cpup  (7)
Since the coordinate mapping is linear,
[w]_B = c1[u1]_B + ... + cp[up]_B  (8)
Conversely, (8) implies (7) because the coordinate mapping is one-to-one. Thus w is a linear combination of u1, ..., up if and only if [w]_B is a linear combination of [u1]_B, ..., [up]_B.
Note:
Students need to be urged to write, not just to compute, in Exercises 27–34. The language in the Study Guide solution of Exercise 31 provides a model for the students. In Exercise 32, students may have difficulty distinguishing between the two isomorphic vector spaces, sometimes giving a vector in ℝ^3 as an answer for part (b).
27. The coordinate mapping produces the coordinate vectors (1, 0, 0, 2), (2, 1, -3, 0), and (0, -1, 2, -1) respectively. We test for linear independence of these vectors by writing them as columns of a matrix and row reducing:
[1 2 0; 0 1 -1; 0 -3 2; 2 0 -1] ~ [1 0 0; 0 1 0; 0 0 1; 0 0 0].
Since the matrix has a pivot in each column, its columns (and thus the given polynomials) are linearly independent.
28. The coordinate mapping produces the coordinate vectors (1, 0, -2, -1), (0, 1, 0, 2), and (1, 1, -2, 0) respectively. We test for linear independence of these vectors by writing them as columns of a matrix and row reducing:
[1 0 1; 0 1 1; -2 0 -2; -1 2 0] ~ [1 0 0; 0 1 0; 0 0 1; 0 0 0].
Since the matrix has a pivot in each column, its columns (and thus the given polynomials) are linearly independent.
29. The coordinate mapping produces the coordinate vectors (1, -2, 1, 0), (0, 1, -2, 1), and (1, -3, 3, -1) respectively. We test for linear independence of these vectors by writing them as columns of a matrix and row reducing:
[1 0 1; -2 1 -3; 1 -2 3; 0 1 -1] ~ [1 0 1; 0 1 -1; 0 0 0; 0 0 0].
Since the matrix does not have a pivot in each column, its columns (and thus the given polynomials) are linearly dependent.
30. The coordinate mapping produces the coordinate vectors (8, -12, 6, -1), (9, -6, 1, 0), and (1, 6, -5, 1) respectively. We test for linear independence of these vectors by writing them as columns of a matrix and row reducing:
[8 9 1; -12 -6 6; 6 1 -5; -1 0 1] ~ [1 0 -1; 0 1 1; 0 0 0; 0 0 0].
Since the matrix does not have a pivot in each column, its columns (and thus the given polynomials) are linearly dependent.
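The method of Exercises 27–30 (map each polynomial to its coordinate vector, then check for a pivot in every column) reduces to a rank computation; a sympy sketch using the coordinate vectors of Exercises 27 and 29 as read above:

```python
import sympy as sp

# Exercise 27: coordinate vectors as columns -- rank 3 = pivot in every column,
# so the polynomials are linearly independent.
M27 = sp.Matrix([[1, 2, 0], [0, 1, -1], [0, -3, 2], [2, 0, -1]])

# Exercise 29: rank 2 < 3 columns, so those polynomials are linearly dependent.
M29 = sp.Matrix([[1, 0, 1], [-2, 1, -3], [1, -2, 3], [0, 1, -1]])

print(M27.rank(), M29.rank())  # 3 2
```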
31. In each part, place the coordinate vectors of the polynomials into the columns of a matrix and reduce the matrix to echelon form.
a. [1 -3 4 1; 3 -5 5 0; -5 7 -6 1] ~ [1 -3 4 1; 0 4 -7 -3; 0 0 0 0]
Since there is not a pivot in each row, the original four column vectors do not span ℝ^3. By the isomorphism between ℝ^3 and ℙ_2, the given set of polynomials does not span ℙ_2.
b. [0 1 -3 2; 5 -8 4 -3; 1 -2 2 0] ~ [1 -2 2 0; 0 2 -6 -3; 0 0 0 7/2]
Since there is a pivot in each row, the original four column vectors span ℝ^3. By the isomorphism between ℝ^3 and ℙ_2, the given set of polynomials spans ℙ_2.
32. a. Place the coordinate vectors of the polynomials into the columns of a matrix and reduce the matrix to echelon form:
[1 0 -1; 0 1 1; 1 3 3] ~ [1 0 -1; 0 1 1; 0 0 1]
The resulting matrix is invertible since it is row equivalent to I_3. The original three column vectors form a basis for ℝ^3 by the Invertible Matrix Theorem. By the isomorphism between ℝ^3 and ℙ_2, the corresponding polynomials form a basis for ℙ_2.
b. Since [q]_B = (1, 1, 2), q = p1 + p2 + 2p3. One might do the algebra in ℙ_2 or choose to compute
[1 0 -1; 0 1 1; 1 3 3][1; 1; 2] = [-1; 3; 10].
This combination of the columns of the matrix corresponds to the same combination of p1, p2, and p3. So q(t) = -1 + 3t + 10t^2.
33. The coordinate mapping produces the coordinate vectors (3, 7, 0, 0), (5, 1, 0, -2), (0, 1, -2, 0) and (1, 16, -6, 2) respectively. To determine whether the set of polynomials is a basis for ℙ_3, we investigate whether the coordinate vectors form a basis for ℝ^4. Writing the vectors as the columns of a matrix and row reducing
[3 5 0 1; 7 1 1 16; 0 0 -2 -6; 0 -2 0 2] ~ [1 0 0 2; 0 1 0 -1; 0 0 1 3; 0 0 0 0],
we find that the matrix is not row equivalent to I_4. Thus the coordinate vectors do not form a basis for ℝ^4. By the isomorphism between ℝ^4 and ℙ_3, the given set of polynomials does not form a basis for ℙ_3.
34. The coordinate mapping produces the coordinate vectors (5, -3, 4, 2), (9, 1, 8, -6), (6, -2, 5, 0), and (0, 0, 0, 1) respectively. To determine whether the set of polynomials is a basis for ℙ_3, we investigate whether the coordinate vectors form a basis for ℝ^4. Writing the vectors as the columns of a matrix, and row reducing
[5 9 6 0; -3 1 -2 0; 4 8 5 0; 2 -6 0 1] ~ [1 0 3/4 0; 0 1 1/4 0; 0 0 0 1; 0 0 0 0],
we find that the matrix is not row equivalent to I_4. Thus the coordinate vectors do not form a basis for ℝ^4. By the isomorphism between ℝ^4 and ℙ_3, the given set of polynomials does not form a basis for ℙ_3.
35. To show that x is in H = Span{v1, v2}, we must show that the vector equation x1v1 + x2v2 = x has a solution. The augmented matrix [v1 v2 x] may be row reduced to show
[11 14 19; -5 -8 -13; 10 13 18; 7 10 15] ~ [1 0 -5/3; 0 1 8/3; 0 0 0; 0 0 0].
Since this system has a solution, x is in H. The solution allows us to find the B-coordinate vector for x: since x = x1v1 + x2v2 = (-5/3)v1 + (8/3)v2, [x]_B = [-5/3; 8/3].
36. To show that x is in H = Span{v1, v2, v3}, we must show that the vector equation x1v1 + x2v2 + x3v3 = x has a solution. The augmented matrix [v1 v2 v3 x] may be row reduced to show
[-6 8 -9 4; 4 -3 5 7; -9 7 -8 -8; 4 -3 3 3] ~ [1 0 0 3; 0 1 0 5; 0 0 1 2; 0 0 0 0].
The first three columns show that B is a basis for H. Moreover, since this system has a solution, x is in H. The solution allows us to find the B-coordinate vector for x: since x = x1v1 + x2v2 + x3v3 = 3v1 + 5v2 + 2v3,
[x]_B = [3; 5; 2].
37. We are given that [x]_B = [1/2; 1/4; 1/6], where B = {[2.6; -1.5; 0], [0; 3; 0], [0; 0; 4.8]}. To find the coordinates of x relative to the standard basis in ℝ^3, we must find x. We compute that
x = P_B[x]_B = [2.6 0 0; -1.5 3 0; 0 0 4.8][1/2; 1/4; 1/6] = [1.3; 0; 0.8].
38. We are given that [x]_B = [1/2; 1/2; 1/3], where B = {[2.6; -1.5; 0], [0; 3; 0], [0; 0; 4.8]}. To find the coordinates of x relative to the standard basis in ℝ^3, we must find x. We compute that
x = P_B[x]_B = [2.6 0 0; -1.5 3 0; 0 0 4.8][1/2; 1/2; 1/3] = [1.3; 0.75; 1.6].
4.5 SOLUTIONS
Notes:
Theorem 9 is true because a vector space isomorphic to ℝ^n has the same algebraic properties as ℝ^n; a proof of this result may not be needed to convince the class. The proof of Theorem 9 relies upon the fact that the coordinate mapping is a linear transformation (which is Theorem 8 in Section 4.4). If you have skipped this result, you can prove Theorem 9 as is done in Introduction to Linear Algebra by Serge Lang (Springer-Verlag, New York, 1986). There are two separate groups of true-false questions in this section; the second batch is more theoretical in nature. Example 4 is useful to get students to visualize subspaces of different dimensions, and to see the relationships between subspaces of different dimensions. Exercises 31 and 32 investigate the relationship between the dimensions of the domain and the range of a linear transformation; Exercise 32 is mentioned in the proof of Theorem 17 in Section 4.8.
1. This subspace is H = Span{v1, v2}, where v1 = [1; 1; 0] and v2 = [2; 1; 3]. Since v1 and v2 are not multiples of each other, {v1, v2} is linearly independent and is thus a basis for H. Hence the dimension of H is 2.
2. This subspace is H = Span{v1, v2}, where v1 = [2; 0; 2] and v2 = [0; 4; 0]. Since v1 and v2 are not multiples of each other, {v1, v2} is linearly independent and is thus a basis for H. Hence the dimension of H is 2.
3. This subspace is H = Span{v1, v2, v3}, where v1 = [0; 1; 0; 1], v2 = [0; 1; 1; 2], and v3 = [2; 0; 3; 0]. Theorem 4 in Section 4.3 can be used to show that this set is linearly independent: v1 ≠ 0, v2 is not a multiple of v1, and (since its first entry is not zero) v3 is not a linear combination of v1 and v2. Thus {v1, v2, v3} is linearly independent and is thus a basis for H. Alternatively, one can show that this set is linearly independent by row reducing the matrix [v1 v2 v3 0]. Hence the dimension of the subspace is 3.
4. This subspace is H = Span{v1, v2}, where v1 = [1; 1; 3; 1] and v2 = [2; 0; 1; 1]. Since v1 and v2 are not multiples of each other, {v1, v2} is linearly independent and is thus a basis for H. Hence the dimension of H is 2.
5. This subspace is H = Span{v1, v2, v3}, where v1 = [1; 2; 0; 3], v2 = [2; 0; 2; 0], and v3 = [0; 5; 2; 6]. The matrix A with these vectors as its columns row reduces to
[1 2 0; 2 0 5; 0 2 2; 3 0 6] ~ [1 0 0; 0 1 0; 0 0 1; 0 0 0].
There is a pivot in each column, so {v1, v2, v3} is linearly independent and is thus a basis for H. Hence the dimension of H is 3.
6. This subspace is H = Span{v1, v2, v3}, where v1 = [3; 0; 7; 3], v2 = [0; 1; 6; 0], and v3 = [1; 3; 5; 1]. The matrix A with these vectors as its columns row reduces to
[3 0 1; 0 1 3; 7 6 5; 3 0 1] ~ [1 0 0; 0 1 0; 0 0 1; 0 0 0].
There is a pivot in each column, so {v1, v2, v3} is linearly independent and is thus a basis for H. Hence the dimension of H is 3.
7. This subspace is H = Nul A, where A = [1 3 1; 0 1 2; 0 2 1]. Since
[A 0] ~ [1 0 0 0; 0 1 0 0; 0 0 1 0],
the homogeneous system has only the trivial solution. Thus H = Nul A = {0}, and the dimension of H is 0.
8. From the equation a - 3b + c = 0, it is seen that (a, b, c, d) = b(3, 1, 0, 0) + c(-1, 0, 1, 0) + d(0, 0, 0, 1). Thus the subspace is H = Span{v1, v2, v3}, where v1 = (3, 1, 0, 0), v2 = (-1, 0, 1, 0), and v3 = (0, 0, 0, 1). It is easily checked that this set of vectors is linearly independent, either by appealing to Theorem 4 in Section 4.3, or by row reducing [v1 v2 v3 0]. Hence the dimension of the subspace is 3.
9. This subspace is H = {[a; b; a] : a, b in ℝ} = Span{v1, v2}, where v1 = [1; 0; 1] and v2 = [0; 1; 0]. Since v1 and v2 are not multiples of each other, {v1, v2} is linearly independent and is thus a basis for H. Hence the dimension of H is 2.
10. The matrix A with these vectors as its columns row reduces to
[1 -2 3; 5 -10 15] ~ [1 -2 3; 0 0 0].
There is one pivot column, so the dimension of Col A (which is the dimension of H) is 1.
11. The matrix A with these vectors as its columns row reduces to
[1 3 -2 5; 0 1 -1 2; 2 1 1 2] ~ [1 0 1 0; 0 1 -1 0; 0 0 0 1].
There are three pivot columns, so the dimension of Col A (which is the dimension of the subspace spanned by the vectors) is 3.
12. The matrix A with these vectors as its columns row reduces to
[1 3 2 3; 2 6 3 5; 0 6 5 5] ~ [1 0 0 1; 0 1 0 0; 0 0 1 1].
There are three pivot columns, so the dimension of Col A (which is the dimension of the subspace spanned by the vectors) is 3.
13. The matrix A is in echelon form. There are three pivot columns, so the dimension of Col A is 3.
There are two columns without pivots, so the equation Ax = 0 has two free variables. Thus the
dimension of Nul A is 2.
14. The matrix A is in echelon form. There are four pivot columns, so the dimension of Col A is 4. There
are three columns without pivots, so the equation Ax = 0 has three free variables. Thus the dimension
of Nul A is 3.
15. The matrix A is in echelon form. There are three pivot columns, so the dimension of Col A is 3.
There are two columns without pivots, so the equation Ax = 0 has two free variables. Thus the
dimension of Nul A is 2.
16. The matrix A row reduces to
[3 2; 6 5] ~ [1 0; 0 1].
There are two pivot columns, so the dimension of Col A is 2. There are no columns without pivots, so the equation Ax = 0 has only the trivial solution 0. Thus Nul A = {0}, and the dimension of Nul A is 0.
17. The matrix A is in echelon form. There are three pivot columns, so the dimension of Col A is 3.
There are no columns without pivots, so the equation Ax = 0 has only the trivial solution 0. Thus Nul
A = {0}, and the dimension of Nul A is 0.
18. The matrix A is in echelon form. There are two pivot columns, so the dimension of Col A is 2. There
is one column without a pivot, so the equation Ax = 0 has one free variable. Thus the dimension of
Nul A is 1.
19. a. True. See the box before Example 5.
b. False. The plane must pass through the origin; see Example 4.
c. False. The dimension of ℙ_n is n + 1; see Example 1.
d. False. The set S must also have n elements; see Theorem 12.
e. True. See Theorem 9.
20. a. False. The set ℝ^2 is not even a subset of ℝ^3.
b. False. The number of free variables is equal to the dimension of Nul A; see the box before Example 5.
c. False. A basis could still have only finitely many elements, which would make the vector space finite-dimensional.
d. False. The set S must also have n elements; see Theorem 12.
e. True. See Example 4.
21. The matrix whose columns are the coordinate vectors of the Hermite polynomials relative to the standard basis {1, t, t^2, t^3} of ℙ_3 is
A = [1 0 -2 0; 0 2 0 -12; 0 0 4 0; 0 0 0 8].
This matrix has 4 pivots, so its columns are linearly independent. Since their coordinate vectors form a linearly independent set, the Hermite polynomials themselves are linearly independent in ℙ_3. Since there are four Hermite polynomials and dim ℙ_3 = 4, the Basis Theorem states that the Hermite polynomials form a basis for ℙ_3.
22. The matrix whose columns are the coordinate vectors of the Laguerre polynomials relative to the standard basis {1, t, t^2, t^3} of ℙ_3 is
A = [1 1 2 6; 0 -1 -4 -18; 0 0 1 9; 0 0 0 -1].
This matrix has 4 pivots, so its columns are linearly independent. Since their coordinate vectors form a linearly independent set, the Laguerre polynomials themselves are linearly independent in ℙ_3. Since there are four Laguerre polynomials and dim ℙ_3 = 4, the Basis Theorem states that the Laguerre polynomials form a basis for ℙ_3.
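Since both coordinate matrices are triangular, their determinants are just the products of the diagonal entries. The sketch below rebuilds each matrix from the polynomials themselves and checks that the determinant is nonzero:

```python
import sympy as sp

t = sp.symbols('t')
hermite = [1, 2*t, -2 + 4*t**2, -12*t + 8*t**3]
laguerre = [1, 1 - t, 2 - 4*t + t**2, 6 - 18*t + 9*t**2 - t**3]

def coord_matrix(polys):
    # Column j holds the coordinates of polys[j] relative to {1, t, t^2, t^3}.
    return sp.Matrix([[sp.expand(p).coeff(t, k) for p in polys]
                      for k in range(4)])

# Nonzero determinant -> four pivots -> each family is a basis for P_3.
print(coord_matrix(hermite).det(), coord_matrix(laguerre).det())  # 64 1
```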
23. The coordinates of p(t) = -1 + 8t^2 + 8t^3 with respect to B satisfy
c1(1) + c2(2t) + c3(-2 + 4t^2) + c4(-12t + 8t^3) = -1 + 8t^2 + 8t^3
Equating coefficients of like powers of t produces the system of equations
c1 - 2c3 = -1
2c2 - 12c4 = 0
4c3 = 8
8c4 = 8
Solving this system gives c1 = 3, c2 = 6, c3 = 2, c4 = 1, and
[p]_B = [3; 6; 2; 1].
24. The coordinates of p(t) = 5 + 5t - 2t^2 with respect to B satisfy
c1(1) + c2(1 - t) + c3(2 - 4t + t^2) = 5 + 5t - 2t^2
Equating coefficients of like powers of t produces the system of equations
c1 + c2 + 2c3 = 5
-c2 - 4c3 = 5
c3 = -2
Solving this system gives c1 = 6, c2 = 3, c3 = -2, and
[p]_B = [6; 3; -2].
25. Note first that n ≥ 1, since S cannot have fewer than 1 vector. Since n ≥ 1, V ≠ {0}. Suppose that S spans V and that S contains fewer than n vectors. By the Spanning Set Theorem, some subset S′ of S is a basis for V. Since S contains fewer than n vectors, and S′ is a subset of S, S′ also contains fewer than n vectors. Thus there is a basis S′ for V with fewer than n vectors, but this is impossible by Theorem 10 since dim V = n. Thus S cannot span V.
26. If dim V = dim H = 0, then V = {0} and H = {0}, so H = V. Suppose that dim V = dim H = n > 0. Then H contains a basis S consisting of n vectors. But applying the Basis Theorem to V, S is also a basis for V. Thus H = V = Span S.
27. Suppose that dim ℙ = k < ∞. Now ℙ_n is a subspace of ℙ for all n, and dim ℙ_{k-1} = k, so dim ℙ_{k-1} = dim ℙ. This would imply that ℙ_{k-1} = ℙ, which is clearly untrue: for example, p(t) = t^k is in ℙ but not in ℙ_{k-1}. Thus the dimension of ℙ cannot be finite.
28. The space C(ℝ) contains ℙ as a subspace. If C(ℝ) were finite-dimensional, then ℙ would also be finite-dimensional by Theorem 11. But ℙ is infinite-dimensional by Exercise 27, so C(ℝ) must also be infinite-dimensional.
29. a. True. Apply the Spanning Set Theorem to the set {v1, ..., vp} and produce a basis for V. This basis will not have more than p elements in it, so dim V ≤ p.
b. True. By Theorem 11, {v1, ..., vp} can be expanded to find a basis for V. This basis will have at least p elements in it, so dim V ≥ p.
c. True. Take any basis (which will contain p vectors) for V and adjoin the zero vector to it.
30. a. False. For a counterexample, let v be a non-zero vector in ℝ^3, and consider the set {v, 2v}. This is a linearly dependent set in ℝ^3, but dim ℝ^3 = 3 > 2.
b. True. If dim V ≤ p, there is a basis for V with p or fewer vectors. This basis would be a spanning set for V with p or fewer vectors. If necessary, vectors in V could be added to this spanning set to give a spanning set for V with exactly p vectors, which contradicts the assumption.
c. False. For a counterexample, let v be a non-zero vector in ℝ^3, and consider the set {v, 2v}. This is a linearly dependent set in ℝ^3 with 3 - 1 = 2 vectors, and dim ℝ^3 = 3.
31. Since H is a nonzero subspace of a finite-dimensional vector space V, H is finite-dimensional and has a basis. Let {u1, ..., up} be a basis for H. We show that the set {T(u1), ..., T(up)} spans T(H). Let y be in T(H). Then there is a vector x in H with T(x) = y. Since x is in H and {u1, ..., up} is a basis for H, x may be written as x = c1u1 + ... + cpup for some scalars c1, ..., cp. Since the transformation T is linear,
y = T(x) = T(c1u1 + ... + cpup) = c1T(u1) + ... + cpT(up)
Thus y is a linear combination of T(u1), ..., T(up), and {T(u1), ..., T(up)} spans T(H). By the Spanning Set Theorem, this set contains a basis for T(H). This basis then has not more than p vectors, and dim T(H) ≤ p = dim H.
32. Since H is a nonzero subspace of a finite-dimensional vector space V, H is finite-dimensional and has a basis. Let {u1, ..., up} be a basis for H. In Exercise 31 above it was shown that {T(u1), ..., T(up)} spans T(H). In Exercise 32 in Section 4.3, it was shown that {T(u1), ..., T(up)} is linearly independent. Thus {T(u1), ..., T(up)} is a basis for T(H), and dim T(H) = p = dim H.
33. [M]
a. To find a basis for ℝ^5 which contains the given vectors, we row reduce the matrix [v1 v2 v3 e1 e2 e3 e4 e5]. The first, second, third, fifth, and sixth columns of its reduced echelon form are pivot columns, so these columns of the original matrix, {v1, v2, v3, e2, e3}, form a basis for ℝ^5.
b. The original vectors are the first k columns of A. Since the set of original vectors is assumed to be linearly independent, these columns of A will be pivot columns and the original set of vectors will be included in the basis. Since the columns of A include all the columns of the identity matrix, Col A = ℝ^n.
34. [M]
a. The B-coordinate vectors of the vectors in C are the columns of the matrix
P = [1 0 -1 0 1 0 -1;
0 1 0 -3 0 5 0;
0 0 2 0 -8 0 18;
0 0 0 4 0 -20 0;
0 0 0 0 8 0 -48;
0 0 0 0 0 16 0;
0 0 0 0 0 0 32]
The matrix P is invertible because it is triangular with nonzero entries along its main diagonal. Thus its columns are linearly independent. Since the coordinate mapping is an isomorphism, this shows that the vectors in C are linearly independent.
b. We know that dim H = 7 because B is a basis for H. Now C is a linearly independent set, and the vectors in C lie in H by the trigonometric identities. Thus by the Basis Theorem, C is a basis for H.
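The columns of P are the coefficients in the multiple-angle identities (cos 2t = -1 + 2cos^2 t, cos 3t = -3cos t + 4cos^3 t, and so on). A numerical spot check at an arbitrary angle:

```python
import numpy as np

cols = [[1, 0, 0, 0, 0, 0, 0],       # 1
        [0, 1, 0, 0, 0, 0, 0],       # cos t
        [-1, 0, 2, 0, 0, 0, 0],      # cos 2t
        [0, -3, 0, 4, 0, 0, 0],      # cos 3t
        [1, 0, -8, 0, 8, 0, 0],      # cos 4t
        [0, 5, 0, -20, 0, 16, 0],    # cos 5t
        [-1, 0, 18, 0, -48, 0, 32]]  # cos 6t

t = 0.7  # arbitrary test angle
powers = np.cos(t) ** np.arange(7)   # (1, cos t, cos^2 t, ..., cos^6 t)
ok = all(np.isclose(np.dot(c, powers), np.cos(k * t))
         for k, c in enumerate(cols))
print(ok)  # True
```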
4.6 SOLUTIONS
Notes:
This section puts together most of the ideas from Chapter 4. The Rank Theorem is the main result in this section. Many students have difficulty with the difference in finding bases for the row space and the column space of a matrix. The first process uses the nonzero rows of an echelon form of the matrix. The second process uses the pivot columns of the original matrix, which are usually found through row reduction. Students may also have problems with the varied effects of row operations on the linear dependence relations among the rows and columns of a matrix. Problems of the type found in Exercises 19–26 make excellent test questions. Figure 1 and Example 4 prepare the way for Theorem 3 in Section 6.1; Exercises 27–29 anticipate Example 6 in Section 7.4.
1. The matrix B is in echelon form. There are two pivot columns, so the dimension of Col A is 2. There are two pivot rows, so the dimension of Row A is 2. There are two columns without pivots, so the equation Ax = 0 has two free variables. Thus the dimension of Nul A is 2. A basis for Col A is the pivot columns of A:
{[1; 1; 5], [4; 2; 6]}.
A basis for Row A is the pivot rows of B: {(1, 0, -1, 5), (0, -2, 5, -6)}. To find a basis for Nul A row reduce to reduced echelon form:
A ~ [1 0 -1 5; 0 1 -5/2 3].
The solution to Ax = 0 in terms of free variables is x1 = x3 - 5x4, x2 = (5/2)x3 - 3x4, with x3 and x4 free. Thus a basis for Nul A is
{[1; 5/2; 1; 0], [-5; -3; 0; 1]}.
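Because row operations preserve both the row space and the null space, the echelon form B alone is enough to check the Row A and Nul A bases above; a sympy sketch with the B of Exercise 1 as read here:

```python
import sympy as sp

B = sp.Matrix([[1, 0, -1, 5],
               [0, -2, 5, -6],
               [0, 0, 0, 0]])

# Row space: nonzero rows of the reduced echelon form.
print(B.rref()[0])    # rows (1, 0, -1, 5) and (0, 1, -5/2, 3)
# Null space: sympy sets each free variable to 1 in turn.
print(B.nullspace())  # vectors (1, 5/2, 1, 0) and (-5, -3, 0, 1)
```

Note that Col A cannot be recovered from B: the pivot columns must be taken from the original matrix A.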
2. The matrix B is in echelon form. There are three pivot columns, so the dimension of Col A is 3. There are three pivot rows, so the dimension of Row A is 3. There are two columns without pivots, so the equation Ax = 0 has two free variables. Thus the dimension of Nul A is 2. A basis for Col A is the pivot columns of A:
{[1; 2; 3; 3], [-4; 6; 3; 0], [2; 3; 3; 0]}.
A basis for Row A is the pivot rows of B: {(1, 3, -4, 1, 2), (0, 0, 1, -1, 1), (0, 0, 0, 0, 5)}. To find a basis for Nul A row reduce to reduced echelon form:
A ~ [1 3 0 -3 0; 0 0 1 -1 0; 0 0 0 0 1; 0 0 0 0 0].
The solution to Ax = 0 in terms of free variables is x1 = -3x2 + 3x4, x3 = x4, x5 = 0, with x2 and x4 free. Thus a basis for Nul A is
{[-3; 1; 0; 0; 0], [3; 0; 1; 1; 0]}.
3. The matrix B is in echelon form. There are three pivot columns, so the dimension of Col A is 3. There are three pivot rows, so the dimension of Row A is 3. There are three columns without pivots, so the equation Ax = 0 has three free variables. Thus the dimension of Nul A is 3. A basis for Col A is the pivot columns of A:
{[2; -2; 4; -2], [6; 3; 9; 3], [3; 0; 3; 3]}.
A basis for Row A is the pivot rows of B: {(2, 6, -6, 6, 3, 6), (0, 3, 0, 3, 3, 0), (0, 0, 0, 0, 3, 0)}. To find a basis for Nul A row reduce to reduced echelon form:
A ~ [1 0 -3 0 0 3; 0 1 0 1 0 0; 0 0 0 0 1 0; 0 0 0 0 0 0].
The solution to Ax = 0 in terms of free variables is x1 = 3x3 - 3x6, x2 = -x4, x5 = 0, with x3, x4, and x6 free. Thus a basis for Nul A is
{[3; 0; 1; 0; 0; 0], [0; -1; 0; 1; 0; 0], [-3; 0; 0; 0; 0; 1]}.
4. The matrix B is in echelon form. There are five pivot columns, so the dimension of Col A is 5. There are five pivot rows, so the dimension of Row A is 5. There is one column without a pivot, so the equation Ax = 0 has one free variable. Thus the dimension of Nul A is 1. A basis for Col A is the pivot columns of A (columns 1, 2, 3, 5, and 6 of A). A basis for Row A is the pivot rows of B:

  {(1, 1, −2, 0, 1, −2), (0, 1, −1, 0, 3, 1), (0, 0, 1, −1, 13, −1), (0, 0, 0, 0, 1, −1), (0, 0, 0, 0, 0, 1)}.

To find a basis for Nul A, row reduce A to reduced echelon form:

  A ~ [1 0 0 −1 0 0]
      [0 1 0 −1 0 0]
      [0 0 1 −1 0 0]
      [0 0 0  0 1 0]
      [0 0 0  0 0 1].

The solution to Ax = 0 in terms of the free variables is x1 = x4, x2 = x4, x3 = x4, x5 = 0, x6 = 0, with x4 free. Thus a basis for Nul A is
  {(1, 1, 1, 1, 0, 0)}.
5. By the Rank Theorem, dim Nul A = 7 − rank A = 7 − 3 = 4. Since dim Row A = rank A, dim Row A = 3. Since rank A^T = dim Col A^T = dim Row A, rank A^T = 3.

6. By the Rank Theorem, dim Nul A = 5 − rank A = 5 − 2 = 3. Since dim Row A = rank A, dim Row A = 2. Since rank A^T = dim Col A^T = dim Row A, rank A^T = 2.
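The Rank Theorem bookkeeping used in Exercises 5 and 6 can be checked numerically. The following sketch (with a made-up matrix, not one from the text) verifies that rank A = dim Row A, that dim Nul A = n − rank A, and that rank A^T = rank A:

```python
import numpy as np

# Illustrative matrix (not from the text): 3 rows, 5 columns, rank 2.
A = np.array([[1, 2, 0, 1, 3],
              [2, 4, 0, 2, 6],   # 2 * row 1, so it adds nothing to the rank
              [0, 0, 1, 1, 1]])

rank = np.linalg.matrix_rank(A)        # dim Col A = dim Row A
dim_nul = A.shape[1] - rank            # Rank Theorem: n - rank A
rank_AT = np.linalg.matrix_rank(A.T)   # rank of the transpose

print(rank, dim_nul, rank_AT)          # 2 3 2
```

The same pattern applies to any matrix: the rank and the nullity always add up to the number of columns.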
7. Yes, Col A = R^4. Since A has four pivot columns, dim Col A = 4. Thus Col A is a four-dimensional subspace of R^4, and Col A = R^4.
No, Nul A ≠ R^3. It is true that dim Nul A = 3, but Nul A is a subspace of R^7.

8. Since A has four pivot columns, rank A = 4, and dim Nul A = 8 − rank A = 8 − 4 = 4.
No, Col A ≠ R^4. It is true that dim Col A = rank A = 4, but Col A is a subspace of R^6.

9. Since dim Nul A = 3, rank A = 6 − dim Nul A = 6 − 3 = 3. So dim Col A = rank A = 3.
No, Col A ≠ R^3. It is true that dim Col A = rank A = 3, but Col A is a subspace of R^4.

10. Since dim Nul A = 5, rank A = 7 − dim Nul A = 7 − 5 = 2. So dim Col A = rank A = 2.

11. Since dim Nul A = 3, rank A = 5 − dim Nul A = 5 − 3 = 2. So dim Row A = dim Col A = rank A = 2.

12. Since dim Nul A = 2, rank A = 4 − dim Nul A = 4 − 2 = 2. So dim Row A = dim Col A = rank A = 2.
13. The rank of a matrix A equals the number of pivot positions the matrix has. If A is either a 7×5 matrix or a 5×7 matrix, the largest number of pivot positions that A could have is 5. Thus the largest possible value for rank A is 5.

14. The dimension of the row space of a matrix A is equal to rank A, which equals the number of pivot positions the matrix has. If A is either a 5×4 matrix or a 4×5 matrix, the largest number of pivot positions that A could have is 4. Thus the largest possible value for dim Row A is 4.

15. Since the rank of A equals the number of pivot positions the matrix has, and A could have at most 3 pivot positions, rank A ≤ 3. Thus dim Nul A = 7 − rank A ≥ 7 − 3 = 4.

16. Since the rank of A equals the number of pivot positions the matrix has, and A could have at most 5 pivot positions, rank A ≤ 5. Thus dim Nul A = 5 − rank A ≥ 5 − 5 = 0.
17. a. True. The rows of A are identified with the columns of A^T. See the paragraph before Example 1.
b. False. See the warning after Example 2.
c. True. See the Rank Theorem.
d. False. See the Rank Theorem.
e. True. See the Numerical Note before the Practice Problem.

18. a. False. Review the warning after Theorem 6 in Section 4.3.
b. False. See the warning after Example 2.
c. True. See the remark in the proof of the Rank Theorem.
d. True. This fact was noted in the paragraph before Example 4. It also follows from the fact that the rows of A^T are the columns of (A^T)^T = A.
e. True. See Theorem 13.
19. Yes. Consider the system as Ax = 0, where A is a 5×6 matrix. The problem states that dim Nul A = 1. By the Rank Theorem, rank A = 6 − dim Nul A = 5. Thus dim Col A = rank A = 5, and since Col A is a subspace of R^5, Col A = R^5. So every vector b in R^5 is also in Col A, and Ax = b has a solution for all b.

20. No. Consider the system as Ax = b, where A is a 6×8 matrix. The problem states that dim Nul A = 2. By the Rank Theorem, rank A = 8 − dim Nul A = 6. Thus dim Col A = rank A = 6, and since Col A is a subspace of R^6, Col A = R^6. So every vector b in R^6 is also in Col A, and Ax = b has a solution for all b. Thus it is impossible to change the entries in b to make Ax = b into an inconsistent system.

21. No. Consider the system as Ax = b, where A is a 9×10 matrix. Since the system has a solution for all b in R^9, A must have a pivot in each row, and so rank A = 9. By the Rank Theorem, dim Nul A = 10 − 9 = 1. Thus it is impossible to find two linearly independent vectors in Nul A.

22. No. Consider the system as Ax = 0, where A is a 10×12 matrix. Since A has at most 10 pivot positions, rank A ≤ 10. By the Rank Theorem, dim Nul A = 12 − rank A ≥ 2. Thus it is impossible to find a single vector in Nul A which spans Nul A.

23. Yes, six equations are sufficient. Consider the system as Ax = 0, where A is a 12×8 matrix. The problem states that dim Nul A = 2. By the Rank Theorem, rank A = 8 − dim Nul A = 6. Thus dim Col A = rank A = 6. So the system Ax = 0 is equivalent to the system Bx = 0, where B is an echelon form of A with 6 nonzero rows. So the six equations in this system are sufficient to describe the solution set of Ax = 0.

24. Yes, No. Consider the system as Ax = b, where A is a 7×6 matrix. Since A has at most 6 pivot positions, rank A ≤ 6. By the Rank Theorem, dim Nul A = 6 − rank A ≥ 0. If dim Nul A = 0, then the system Ax = b will have no free variables. The solution to Ax = b, if it exists, would thus have to be unique. Since rank A ≤ 6, Col A will be a proper subspace of R^7. Thus there exists a b in R^7 for which the system Ax = b is inconsistent, and the system Ax = b cannot have a unique solution for all b.

25. No. Consider the system as Ax = b, where A is a 10×12 matrix. The problem states that dim Nul A = 3. By the Rank Theorem, dim Col A = rank A = 12 − dim Nul A = 9. Thus Col A will be a proper subspace of R^10. Thus there exists a b in R^10 for which the system Ax = b is inconsistent, and the system Ax = b cannot have a solution for all b.
26. Consider the system Ax = 0, where A is an m×n matrix with m > n. Since the rank of A is the number of pivot positions that A has and A is assumed to have full rank, rank A = n. By the Rank Theorem, dim Nul A = n − rank A = 0. So Nul A = {0}, and the system Ax = 0 has only the trivial solution. This happens if and only if the columns of A are linearly independent.

27. Since A is an m × n matrix, Row A is a subspace of R^n, Col A is a subspace of R^m, and Nul A is a subspace of R^n. Likewise, since A^T is an n × m matrix, Row A^T is a subspace of R^m, Col A^T is a subspace of R^n, and Nul A^T is a subspace of R^m. Since Row A = Col A^T and Col A = Row A^T, there are four distinct subspaces in the list: Row A, Col A, Nul A, and Nul A^T.

28. a. Since A is an m × n matrix and dim Row A = rank A,

  dim Row A + dim Nul A = rank A + dim Nul A = n.

b. Since A^T is an n × m matrix and dim Col A^T = dim Row A = dim Col A = rank A,

  dim Col A^T + dim Nul A^T = rank A^T + dim Nul A^T = m.

29. Let A be an m × n matrix. The system Ax = b will have a solution for all b in R^m if and only if A has a pivot position in each row, which happens if and only if dim Col A = m. By Exercise 28(b), dim Col A = m if and only if dim Nul A^T = m − m = 0, or Nul A^T = {0}. Finally, Nul A^T = {0} if and only if the equation A^T x = 0 has only the trivial solution.

30. The equation Ax = b is consistent if and only if rank [A b] = rank A, because the two ranks will be equal if and only if b is not a pivot column of [A b]. The result then follows from Theorem 2 in Section 1.2.
31. Compute that

  uv^T = [2; −3; 5][a b c] = [ 2a  2b  2c]
                             [−3a −3b −3c]
                             [ 5a  5b  5c].

Each column of uv^T is a multiple of u, so dim Col uv^T = 1, unless a = b = c = 0, in which case uv^T is the 3 × 3 zero matrix and dim Col uv^T = 0. In any case, rank uv^T = dim Col uv^T ≤ 1.

32. Note that the second row of the matrix is twice the first row. Thus if v = (1, −3, 4), which is the first row of the matrix,

  uv^T = [1; 2][1 −3 4] = [1 −3 4]
                          [2 −6 8].
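The rank-one structure of an outer product, as in Exercises 31 and 32, is easy to confirm numerically. This sketch uses the vectors from Exercise 32:

```python
import numpy as np

# An outer product u v^T always has rank at most 1.
u = np.array([[1], [2]])          # column vector u from Exercise 32
v = np.array([[1, -3, 4]])        # row vector v^T
A = u @ v                         # the 2x3 matrix [1 -3 4; 2 -6 8]

print(A.tolist())                 # [[1, -3, 4], [2, -6, 8]]
print(np.linalg.matrix_rank(A))   # 1
```

Every column of A is a multiple of u, which is exactly why the rank cannot exceed 1.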
33. Let A = [u1 u2 u3], and assume that rank A = 1. Suppose that u1 ≠ 0. Then {u1} is a basis for Col A, since Col A is assumed to be one-dimensional. Thus there are scalars x and y with u2 = x u1 and u3 = y u1, and A = u1 v^T, where v = [1; x; y].
If u1 = 0 but u2 ≠ 0, then similarly {u2} is a basis for Col A, since Col A is assumed to be one-dimensional. Thus there is a scalar x with u3 = x u2, and A = u2 v^T, where v = [0; 1; x].
If u1 = u2 = 0 but u3 ≠ 0, then A = u3 v^T, where v = [0; 0; 1].
34. Let A be an m × n matrix of rank r > 0, and let U be an echelon form of A. Since A can be reduced to U by row operations, there exist invertible elementary matrices E1, …, Ep with (Ep ⋯ E1)A = U. Thus A = (Ep ⋯ E1)^{−1}U, since the product of invertible matrices is invertible. Let E = (Ep ⋯ E1)^{−1}; then A = EU. Let the columns of E be denoted by c1, …, cm. Since the rank of A is r, U has r nonzero rows, which can be denoted d1^T, …, dr^T. By the column-row expansion of A (Theorem 10 in Section 2.4):

  A = EU = [c1 ⋯ cm][d1^T; …; dr^T; 0; …; 0] = c1 d1^T + ⋯ + cr dr^T,

which is the sum of r rank 1 matrices.
35. [M]
a. Begin by reducing A to reduced echelon form:

  A ~ [1 0  13/2 0    5 0 −3]
      [0 1 −11/2 0  1/2 0  2]
      [0 0     0 1 −11/2 0 7]
      [0 0     0 0    0 1  1]
      [0 0     0 0    0 0  0].

A basis for Col A is the pivot columns of A (columns 1, 2, 4, and 6 of A), so matrix C contains these four columns. A basis for Row A is the pivot rows of the reduced echelon form of A, so matrix R contains the four nonzero rows displayed above:

  R = [1 0  13/2 0    5 0 −3]
      [0 1 −11/2 0  1/2 0  2]
      [0 0     0 1 −11/2 0 7]
      [0 0     0 0    0 1  1].

To find a basis for Nul A, note that the solution to Ax = 0 in terms of the free variables is x1 = −(13/2)x3 − 5x5 + 3x7, x2 = (11/2)x3 − (1/2)x5 − 2x7, x4 = (11/2)x5 − 7x7, x6 = −x7, with x3, x5, and x7 free. Thus matrix N is

  N = [−13/2   −5  3]
      [ 11/2 −1/2 −2]
      [    1    0  0]
      [    0 11/2 −7]
      [    0    1  0]
      [    0    0 −1]
      [    0    0  1].

b. The reduced echelon form of A^T is

  A^T ~ [1 0 0 0  −2/11]
        [0 1 0 0 −41/11]
        [0 0 1 0      0]
        [0 0 0 1 −28/11]
        [0 0 0 0      0]
        [0 0 0 0      0]
        [0 0 0 0      0],

so the solution to A^T x = 0 in terms of free variables is x1 = (2/11)x5, x2 = (41/11)x5, x3 = 0, x4 = (28/11)x5, with x5 free. Thus matrix M is

  M = [2/11; 41/11; 0; 28/11; 1].

The matrix S = [R^T N] is 7 × 7 because the columns of R^T and N are in R^7 and dim Row A + dim Nul A = 7. The matrix T = [C M] is 5 × 5 because the columns of C and M are in R^5 and dim Col A + dim Nul A^T = 5. Both S and T are invertible because their columns are linearly independent. This fact will be proven in general in Theorem 3 of Section 6.1.
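The computations in Exercise 35 can be automated. The sketch below uses SymPy on a small stand-in matrix (not the one from the exercise) to extract bases for Col A, Row A, and Nul A in one pass:

```python
import sympy as sp

# A small stand-in matrix of rank 2 to show how the three bases are found.
A = sp.Matrix([[1, 2, 0, 3],
               [2, 4, 1, 7],
               [1, 2, 1, 4]])

col_basis = A.columnspace()   # the pivot columns of A
row_basis = A.rowspace()      # the nonzero rows of an echelon form of A
nul_basis = A.nullspace()     # a basis for the solutions of Ax = 0

# rank + nullity = number of columns (Rank Theorem)
print(len(col_basis), len(row_basis), len(nul_basis))  # 2 2 2
```

The counts confirm that dim Col A = dim Row A = rank A and that dim Nul A = n − rank A.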
36. [M] Answers will vary, but in most cases C will be 6 × 4, and will be constructed from the first 4 columns of A. In most cases R will be 4 × 7, N will be 7 × 3, and M will be 6 × 2.

37. [M] The C and R from Exercise 35 work here, and A = CR.

38. [M] If A is nonzero, then A = CR. Note that CR = [Cr1 Cr2 ⋯ Crn], where r1, …, rn are the columns of R. The columns of R are either pivot columns of R or are not pivot columns of R.
Consider first the pivot columns of R. The i-th pivot column of R is e_i, the i-th column in the identity matrix, so Ce_i is the i-th pivot column of A. Since A and R have pivot columns in the same locations, when C multiplies a pivot column of R, the result is the corresponding pivot column of A in its proper location.
Now suppose r_j is a nonpivot column of R. Then r_j contains the weights needed to construct the j-th column of A from the pivot columns of A, as is discussed in Example 9 of Section 4.3 and in the paragraph preceding that example. Thus r_j contains the weights needed to construct the j-th column of A from the columns of C, and Cr_j = a_j.

4.7 SOLUTIONS

Notes:
This section depends heavily on the coordinate systems introduced in Section 4.4. The row reduction algorithm that produces P_{C←B} can also be deduced from Exercise 15 in Section 2.2, by row reducing [P_C P_B] to [I P_C^{−1}P_B]. The change-of-coordinates matrix here is interpreted in Section 5.4 as the matrix of the identity transformation relative to two bases.
1. a. Since b1 = 6c1 − 2c2 and b2 = 9c1 − 4c2, [b1]_C = [6; −2], [b2]_C = [9; −4], and

  P_{C←B} = [ 6  9]
            [−2 −4].

b. Since x = −3b1 + 2b2, [x]_B = [−3; 2] and

  [x]_C = P_{C←B}[x]_B = [6 9; −2 −4][−3; 2] = [0; −2].
2. a. Since b1 = −2c1 + 4c2 and b2 = 3c1 − 6c2, [b1]_C = [−2; 4], [b2]_C = [3; −6], and

  P_{C←B} = [−2  3]
            [ 4 −6].

b. Since x = 2b1 + 3b2, [x]_B = [2; 3] and

  [x]_C = P_{C←B}[x]_B = [−2 3; 4 −6][2; 3] = [5; −10].
3. Equation (ii) is satisfied by P for all x in V.

4. Equation (i) is satisfied by P for all x in V.

5. a. Since a1 = 4b1 − b2, a2 = −b1 + b2 + b3, and a3 = b2 − 2b3, [a1]_B = [4; −1; 0], [a2]_B = [−1; 1; 1], [a3]_B = [0; 1; −2], and

  P_{B←A} = [ 4 −1  0]
            [−1  1  1]
            [ 0  1 −2].

b. Since x = 3a1 + 4a2 + a3, [x]_A = [3; 4; 1] and

  [x]_B = P_{B←A}[x]_A = [4 −1 0; −1 1 1; 0 1 −2][3; 4; 1] = [8; 2; 2].

6. a. Since f1 = 2d1 − d2 + d3, f2 = 3d2 + d3, and f3 = −3d1 + 2d3, [f1]_D = [2; −1; 1], [f2]_D = [0; 3; 1], [f3]_D = [−3; 0; 2], and

  P_{D←F} = [ 2 0 −3]
            [−1 3  0]
            [ 1 1  2].

b. Since x = f1 − 2f2 + 2f3, [x]_F = [1; −2; 2] and

  [x]_D = P_{D←F}[x]_F = [2 0 −3; −1 3 0; 1 1 2][1; −2; 2] = [−4; −7; 3].
7. To find P_{C←B}, row reduce the matrix [c1 c2 b1 b2]:

  [c1 c2 b1 b2] ~ [1 0  3 −1]
                  [0 1 −5  2].

Thus P_{C←B} = [3 −1; −5 2], and P_{B←C} = (P_{C←B})^{−1} = [2 1; 5 3].

8. To find P_{C←B}, row reduce the matrix [c1 c2 b1 b2]:

  [c1 c2 b1 b2] ~ [1 0   9  8]
                  [0 1 −10 −9].

Thus P_{C←B} = [9 8; −10 −9], and P_{B←C} = (P_{C←B})^{−1} = [9 8; −10 −9].

9. To find P_{C←B}, row reduce the matrix [c1 c2 b1 b2]:

  [c1 c2 b1 b2] ~ [1 0 2 −3]
                  [0 1 0  1].

Thus P_{C←B} = [2 −3; 0 1], and P_{B←C} = (P_{C←B})^{−1} = [1/2 3/2; 0 1].

10. To find P_{C←B}, row reduce the matrix [c1 c2 b1 b2]:

  [c1 c2 b1 b2] ~ [1 0 −3 1]
                  [0 1  2 0].

Thus P_{C←B} = [−3 1; 2 0], and P_{B←C} = (P_{C←B})^{−1} = [0 1/2; 1 3/2].
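The row-reduction recipe in Exercises 7 to 10 amounts to computing P_C^{−1}P_B. A short sketch, with hypothetical bases for R^2 (not the ones in the exercises):

```python
import numpy as np

# Hypothetical bases of R^2: the columns of Pc and Pb are the basis
# vectors of C and B written in standard coordinates.
Pc = np.array([[1., 2.], [0., 1.]])
Pb = np.array([[3., 1.], [1., 1.]])

# Row reducing [Pc | Pb] to [I | P] gives P = P_{C<-B} = Pc^{-1} Pb.
P_CB = np.linalg.solve(Pc, Pb)
P_BC = np.linalg.inv(P_CB)     # change of coordinates the other way

# [x]_C = P_CB [x]_B for any x: check with [x]_B = (1, 1).
xB = np.array([1., 1.])
x = Pb @ xB                    # x in standard coordinates
print(np.allclose(Pc @ (P_CB @ xB), x))   # True
```

Using `solve` instead of forming Pc^{−1} explicitly mirrors the row reduction of the augmented matrix [Pc Pb].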
11. a. False. See Theorem 15.
b. True. See the first paragraph in the subsection “Change of Basis in R^n.”

12. a. True. The columns of P_{C←B} are coordinate vectors of the linearly independent set B. See the second paragraph after Theorem 15.
b. False. The row reduction is discussed after Example 2. The matrix P obtained there satisfies [x]_C = P[x]_B.
13. Let B = {b1, b2, b3} = {1 − 2t + t², 3 − 5t + 4t², 2t + 3t²} and let C = {c1, c2, c3} = {1, t, t²}. The C-coordinate vectors of b1, b2, and b3 are

  [b1]_C = [1; −2; 1], [b2]_C = [3; −5; 4], [b3]_C = [0; 2; 3].

So

  P_{C←B} = [ 1  3 0]
            [−2 −5 2]
            [ 1  4 3].

Let x = −1 + 2t. Then the coordinate vector [x]_B satisfies

  P_{C←B}[x]_B = [x]_C = [−1; 2; 0].

This system may be solved by row reducing its augmented matrix:

  [ 1  3 0 −1]   [1 0 0  5]
  [−2 −5 2  2] ~ [0 1 0 −2],  so [x]_B = [5; −2; 1].
  [ 1  4 3  0]   [0 0 1  1]

14. Let B = {b1, b2, b3} = {1 − 3t², 2 + t − 5t², 1 + 2t} and let C = {c1, c2, c3} = {1, t, t²}. The C-coordinate vectors of b1, b2, and b3 are

  [b1]_C = [1; 0; −3], [b2]_C = [2; 1; −5], [b3]_C = [1; 2; 0].

So

  P_{C←B} = [ 1  2 1]
            [ 0  1 2]
            [−3 −5 0].

Let x = t². Then the coordinate vector [x]_B satisfies

  P_{C←B}[x]_B = [x]_C = [0; 0; 1].

This system may be solved by row reducing its augmented matrix:

  [ 1  2 1 0]   [1 0 0  3]
  [ 0  1 2 0] ~ [0 1 0 −2],  so [x]_B = [3; −2; 1]
  [−3 −5 0 1]   [0 0 1  1]

and t² = 3(1 − 3t²) − 2(2 + t − 5t²) + (1 + 2t).
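The polynomial coordinate computation in Exercise 13 can be reproduced with SymPy; the helper `coords` below is a made-up name for illustration:

```python
import sympy as sp

t = sp.symbols('t')
# Exercise 13: B = {1 - 2t + t^2, 3 - 5t + 4t^2, 2t + 3t^2}, C = {1, t, t^2}.
B = [1 - 2*t + t**2, 3 - 5*t + 4*t**2, 2*t + 3*t**2]
x = -1 + 2*t

def coords(p):
    # C-coordinate vector of a polynomial relative to {1, t, t^2}
    p = sp.expand(p)
    return sp.Matrix([p.coeff(t, k) for k in range(3)])

P = sp.Matrix.hstack(*[coords(b) for b in B])   # P_{C<-B}
xB = P.LUsolve(coords(x))                        # solve P [x]_B = [x]_C
print(xB.T)                                      # [x]_B = (5, -2, 1)
```

The solve step is exactly the row reduction of the augmented matrix shown in the text.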
15. (a) B is a basis for V
(b) the coordinate mapping is a linear transformation
(c) of the product of a matrix and a vector
(d) the coordinate vector of v relative to B

16. (a) [b1]_C = Q[b1]_B = Qe1, where e1 = [1; 0; …; 0]
(b) [bk]_C
(c) [bk]_C = Q[bk]_B = Qe_k
17. [M]
a. Since we found P in Exercise 34 of Section 4.5, we can calculate that

  P^{−1} = (1/32)[32  0 16  0 12  0 10]
                 [ 0 32  0 24  0 20  0]
                 [ 0  0 16  0 16  0 15]
                 [ 0  0  0  8  0 10  0]
                 [ 0  0  0  0  4  0  6]
                 [ 0  0  0  0  0  2  0]
                 [ 0  0  0  0  0  0  1].

b. Since P is the change-of-coordinates matrix from C to B, P^{−1} will be the change-of-coordinates matrix from B to C. By Theorem 15, the columns of P^{−1} will be the C-coordinate vectors of the basis vectors in B. Thus

  cos²t = (1/2)(1 + cos 2t)
  cos³t = (1/4)(3 cos t + cos 3t)
  cos⁴t = (1/8)(3 + 4 cos 2t + cos 4t)
  cos⁵t = (1/16)(10 cos t + 5 cos 3t + cos 5t)
  cos⁶t = (1/32)(10 + 15 cos 2t + 6 cos 4t + cos 6t)

18. [M] The C-coordinate vector of the integrand is (0, 0, 0, 5, −6, 5, −12). Using P^{−1} from the previous exercise, the B-coordinate vector of the integrand will be

  P^{−1}(0, 0, 0, 5, −6, 5, −12) = (−6, 55/8, −69/8, 45/16, −3, 5/16, −3/8).

Thus the integral may be rewritten as

  ∫ (−6 + (55/8) cos t − (69/8) cos 2t + (45/16) cos 3t − 3 cos 4t + (5/16) cos 5t − (3/8) cos 6t) dt,

which equals

  −6t + (55/8) sin t − (69/16) sin 2t + (15/16) sin 3t − (3/4) sin 4t + (1/16) sin 5t − (1/16) sin 6t + C.
19. [M]
a. If C is the basis {v1, v2, v3}, then the columns of P are [u1]_C, [u2]_C, and [u3]_C. So u_j = [v1 v2 v3][u_j]_C, and [u1 u2 u3] = [v1 v2 v3]P. In the current exercise,

  [u1 u2 u3] = [−2 −8 −7][ 1  2 −1]   [−6 −6 −5]
               [ 2  5  2][−3 −5  0] = [−5 −9  0]
               [ 3  2  6][ 4  6  1]   [21 32  3].

b. Analogously to part a, [v1 v2 v3] = [w1 w2 w3]P, so [w1 w2 w3] = [v1 v2 v3]P^{−1}. In the current exercise,

  [w1 w2 w3] = [−2 −8 −7][ 1  2 −1]^{−1}   [−2 −8 −7][ 5  8  5]   [28  38  21]
               [ 2  5  2][−3 −5  0]      = [ 2  5  2][−3 −5 −3] = [−9 −13 −7]
               [ 3  2  6][ 4  6  1]        [ 3  2  6][−2 −2 −1]   [−3   2   3].
20. a. P_{D←B} = P_{D←C}P_{C←B}.
Let x be any vector in the two-dimensional vector space. Since P_{C←B} is the change-of-coordinates matrix from B to C and P_{D←C} is the change-of-coordinates matrix from C to D,

  [x]_C = P_{C←B}[x]_B and [x]_D = P_{D←C}[x]_C = P_{D←C}P_{C←B}[x]_B.

But since P_{D←B} is the change-of-coordinates matrix from B to D,

  [x]_D = P_{D←B}[x]_B.

Thus

  P_{D←B}[x]_B = P_{D←C}P_{C←B}[x]_B

for any vector [x]_B in R², and P_{D←B} = P_{D←C}P_{C←B}.

b. [M] For example, let

  B = {[7; 5], [−3; −1]},  C = {[1; −5], [−2; 2]},  D = {[1; −8], [1; −5]}.

Then we can calculate the change-of-coordinates matrices:

  [ 1 −2 7 −3] ~ [1 0 −3 1], so P_{C←B} = [−3 1]
  [−5  2 5 −1]   [0 1 −5 2]              [−5 2]

  [ 1  1  1 −2] ~ [1 0 0   8/3], so P_{D←C} = [0   8/3]
  [−8 −5 −5  2]   [0 1 1 −14/3]              [1 −14/3]

  [ 1  1 7 −3] ~ [1 0 −40/3  16/3], so P_{D←B} = [−40/3  16/3]
  [−8 −5 5 −1]   [0 1  61/3 −25/3]              [ 61/3 −25/3]

One confirms easily that

  P_{D←B} = [−40/3 16/3; 61/3 −25/3] = [0 8/3; 1 −14/3][−3 1; −5 2] = P_{D←C}P_{C←B}.
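The factorization P_{D←B} = P_{D←C}P_{C←B} from Exercise 20(b) can be checked numerically with the same bases:

```python
import numpy as np

# Bases from Exercise 20(b): each matrix's columns are the basis vectors.
B = np.array([[7., -3.], [5., -1.]])
C = np.array([[1., -2.], [-5., 2.]])
D = np.array([[1., 1.], [-8., -5.]])

P_CB = np.linalg.solve(C, B)   # row reduce [C | B] -> [I | P_{C<-B}]
P_DC = np.linalg.solve(D, C)
P_DB = np.linalg.solve(D, B)

print(np.allclose(P_DB, P_DC @ P_CB))   # True
print(np.round(P_CB))                    # [[-3.  1.] [-5.  2.]]
```

Composing the two coordinate changes reproduces the direct change from B to D, as the proof in part (a) predicts.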
4.8 SOLUTIONS

Notes:
This is an important section for engineering students and worth extra class time. To spend only one lecture on this section, you could cover through Example 5, but assign the somewhat lengthy Example 3 for reading. Finding a spanning set for the solution space of a difference equation uses the Basis Theorem (Section 4.5) and Theorem 17 in this section, and demonstrates the power of the theory of Chapter 4 in helping to solve applied problems. This section anticipates Section 5.7 on differential equations. The reduction of an nth-order difference equation to a linear system of first-order difference equations was introduced in Section 1.10, and is revisited in Sections 4.9 and 5.6. Example 3 is the background for Exercise 26 in Section 6.5.

1. Let y_k = 2^k. Then

  y_{k+2} + 2y_{k+1} − 8y_k = 2^{k+2} + 2(2^{k+1}) − 8(2^k) = 2^k(2² + 2·2 − 8) = 2^k(0) = 0 for all k.

Since the difference equation holds for all k, 2^k is a solution.
Let y_k = (−4)^k. Then

  y_{k+2} + 2y_{k+1} − 8y_k = (−4)^{k+2} + 2(−4)^{k+1} − 8(−4)^k = (−4)^k((−4)² + 2(−4) − 8) = (−4)^k(0) = 0 for all k.

Since the difference equation holds for all k, (−4)^k is a solution.

2. Let y_k = 5^k. Then

  y_{k+2} − 25y_k = 5^{k+2} − 25(5^k) = 5^k(5² − 25) = 5^k(0) = 0 for all k.

Since the difference equation holds for all k, 5^k is a solution.
Let y_k = (−5)^k. Then

  y_{k+2} − 25y_k = (−5)^{k+2} − 25(−5)^k = (−5)^k((−5)² − 25) = (−5)^k(0) = 0 for all k.

Since the difference equation holds for all k, (−5)^k is a solution.
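A quick numerical sketch (not from the text) confirms that both signals from Exercise 1 satisfy y_{k+2} + 2y_{k+1} − 8y_k = 0:

```python
# Check that 2^k and (-4)^k satisfy y_{k+2} + 2 y_{k+1} - 8 y_k = 0.
def residual(y, k):
    # left-hand side of the difference equation at index k
    return y(k + 2) + 2 * y(k + 1) - 8 * y(k)

for k in range(10):
    assert residual(lambda k: 2 ** k, k) == 0
    assert residual(lambda k: (-4) ** k, k) == 0

print("both signals solve the difference equation")
```

The residual vanishes at every index checked, matching the algebraic factorization 2^k(2² + 2·2 − 8) = 0.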
3. The signals 2^k and (−4)^k are linearly independent because neither is a multiple of the other; that is, there is no scalar c such that 2^k = c(−4)^k for all k. By Theorem 17, the solution set H of the difference equation y_{k+2} + 2y_{k+1} − 8y_k = 0 is two-dimensional. By the Basis Theorem, the two linearly independent signals 2^k and (−4)^k form a basis for H.

4. The signals 5^k and (−5)^k are linearly independent because neither is a multiple of the other; that is, there is no scalar c such that 5^k = c(−5)^k for all k. By Theorem 17, the solution set H of the difference equation y_{k+2} − 25y_k = 0 is two-dimensional. By the Basis Theorem, the two linearly independent signals 5^k and (−5)^k form a basis for H.

5. Let y_k = (−2)^k. Then

  y_{k+2} + 4y_{k+1} + 4y_k = (−2)^{k+2} + 4(−2)^{k+1} + 4(−2)^k = (−2)^k((−2)² + 4(−2) + 4) = (−2)^k(0) = 0 for all k.

Since the difference equation holds for all k, (−2)^k is in the solution set H.
Let y_k = k(−2)^k. Then

  y_{k+2} + 4y_{k+1} + 4y_k = (k+2)(−2)^{k+2} + 4(k+1)(−2)^{k+1} + 4k(−2)^k
    = (−2)^k((k+2)(−2)² + 4(k+1)(−2) + 4k)
    = (−2)^k(4k + 8 − 8k − 8 + 4k) = (−2)^k(0) = 0 for all k.

Since the difference equation holds for all k, k(−2)^k is in the solution set H.
The signals (−2)^k and k(−2)^k are linearly independent because neither is a multiple of the other; that is, there is no scalar c such that (−2)^k = ck(−2)^k for all k, and there is no scalar c such that k(−2)^k = c(−2)^k for all k. By Theorem 17, dim H = 2, so the two linearly independent signals (−2)^k and k(−2)^k form a basis for H by the Basis Theorem.

6. Let y_k = 4^k cos(kπ/2). Then

  y_{k+2} + 16y_k = 4^{k+2} cos((k+2)π/2) + 16·4^k cos(kπ/2)
    = 16·4^k(cos(kπ/2 + π) + cos(kπ/2))
    = 16·4^k(0) = 0 for all k,

since cos(t + π) = −cos t for all t. Since the difference equation holds for all k, 4^k cos(kπ/2) is in the solution set H.
Let y_k = 4^k sin(kπ/2). Then

  y_{k+2} + 16y_k = 4^{k+2} sin((k+2)π/2) + 16·4^k sin(kπ/2)
    = 16·4^k(sin(kπ/2 + π) + sin(kπ/2))
    = 16·4^k(0) = 0 for all k,

since sin(t + π) = −sin t for all t. Since the difference equation holds for all k, 4^k sin(kπ/2) is in the solution set H.
The signals 4^k cos(kπ/2) and 4^k sin(kπ/2) are linearly independent because neither is a multiple of the other. By Theorem 17, dim H = 2, so the two linearly independent signals 4^k cos(kπ/2) and 4^k sin(kπ/2) form a basis for H by the Basis Theorem.
7. Compute and row reduce the Casorati matrix for the signals 1^k, 2^k, and (−2)^k, setting k = 0 for convenience:

  [1 1  1]   [1 0 0]
  [1 2 −2] ~ [0 1 0]
  [1 4  4]   [0 0 1]

This Casorati matrix is row equivalent to the identity matrix, thus is invertible by the IMT. Hence the set of signals {1^k, 2^k, (−2)^k} is linearly independent in the signal space S. The exercise states that these signals are in the solution set H of a third-order difference equation. By Theorem 17, dim H = 3, so the three linearly independent signals 1^k, 2^k, (−2)^k form a basis for H by the Basis Theorem.

8. Compute and row reduce the Casorati matrix for the signals (−1)^k, 2^k, and 3^k, setting k = 0 for convenience:

  [ 1 1 1]   [1 0 0]
  [−1 2 3] ~ [0 1 0]
  [ 1 4 9]   [0 0 1]

This Casorati matrix is row equivalent to the identity matrix, thus is invertible by the IMT. Hence the set of signals {(−1)^k, 2^k, 3^k} is linearly independent in the signal space S. The exercise states that these signals are in the solution set H of a third-order difference equation. By Theorem 17, dim H = 3, so the three linearly independent signals (−1)^k, 2^k, and 3^k form a basis for H by the Basis Theorem.

9. Compute and row reduce the Casorati matrix for the signals 2^k, 5^k cos(kπ/2), and 5^k sin(kπ/2), setting k = 0 for convenience:

  [1   1 0]   [1 0 0]
  [2   0 5] ~ [0 1 0]
  [4 −25 0]   [0 0 1]

This Casorati matrix is row equivalent to the identity matrix, thus is invertible by the IMT. Hence the set of signals {2^k, 5^k cos(kπ/2), 5^k sin(kπ/2)} is linearly independent in the signal space S. The exercise states that these signals are in the solution set H of a third-order difference equation. By Theorem 17, dim H = 3, so the three linearly independent signals 2^k, 5^k cos(kπ/2), and 5^k sin(kπ/2) form a basis for H by the Basis Theorem.

10. Compute and row reduce the Casorati matrix for the signals (−2)^k, k(−2)^k, and 3^k, setting k = 0 for convenience:

  [ 1  0 1]   [1 0 0]
  [−2 −2 3] ~ [0 1 0]
  [ 4  8 9]   [0 0 1]

This Casorati matrix is row equivalent to the identity matrix, thus is invertible by the IMT. Hence the set of signals {(−2)^k, k(−2)^k, 3^k} is linearly independent in the signal space S. The exercise states that these signals are in the solution set H of a third-order difference equation. By Theorem 17, dim H = 3, so the three linearly independent signals (−2)^k, k(−2)^k, and 3^k form a basis for H by the Basis Theorem.
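Building a Casorati matrix as in Exercises 7 to 10 is mechanical; this sketch does it for the signals of Exercise 7 and uses a nonzero determinant in place of row reduction to show invertibility:

```python
import numpy as np

# Casorati matrix at k = 0 for the signals of Exercise 7: 1^k, 2^k, (-2)^k.
signals = [lambda k: 1 ** k, lambda k: 2 ** k, lambda k: (-2) ** k]
C = np.array([[s(k) for s in signals] for k in range(3)])

# A nonzero determinant means the matrix is invertible, so the signals
# are linearly independent.
print(C.tolist())                       # [[1, 1, 1], [1, 2, -2], [1, 4, 4]]
print(abs(np.linalg.det(C)) > 1e-9)     # True
```

Swapping in the signals of Exercises 8 to 10 gives the other three Casorati matrices.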
11. The solution set H of this third-order difference equation has dim H = 3 by Theorem 17. The two signals (−1)^k and 2^k cannot possibly span a three-dimensional space, and so cannot be a basis for H.

12. The solution set H of this fourth-order difference equation has dim H = 4 by Theorem 17. The two signals 3^k and (−2)^k cannot possibly span a four-dimensional space, and so cannot be a basis for H.

13. The auxiliary equation for this difference equation is r² − r + 2/9 = 0. By the quadratic formula (or factoring), r = 2/3 or r = 1/3, so two solutions of the difference equation are (2/3)^k and (1/3)^k. The signals (2/3)^k and (1/3)^k are linearly independent because neither is a multiple of the other. By Theorem 17, the solution space is two-dimensional, so the two linearly independent signals (2/3)^k and (1/3)^k form a basis for the solution space by the Basis Theorem.

14. The auxiliary equation for this difference equation is r² − 5r + 6 = 0. By the quadratic formula (or factoring), r = 2 or r = 3, so two solutions of the difference equation are 2^k and 3^k. The signals 2^k and 3^k are linearly independent because neither is a multiple of the other. By Theorem 17, the solution space is two-dimensional, so the two linearly independent signals 2^k and 3^k form a basis for the solution space by the Basis Theorem.

15. The auxiliary equation for this difference equation is 6r² + r − 2 = 0. By the quadratic formula (or factoring), r = 1/2 or r = −2/3, so two solutions of the difference equation are (1/2)^k and (−2/3)^k. The signals (1/2)^k and (−2/3)^k are linearly independent because neither is a multiple of the other. By Theorem 17, the solution space is two-dimensional, so the two linearly independent signals (1/2)^k and (−2/3)^k form a basis for the solution space by the Basis Theorem.

16. The auxiliary equation for this difference equation is r² − 25 = 0. By the quadratic formula (or factoring), r = 5 or r = −5, so two solutions of the difference equation are 5^k and (−5)^k. The signals 5^k and (−5)^k are linearly independent because neither is a multiple of the other. By Theorem 17, the solution space is two-dimensional, so the two linearly independent signals 5^k and (−5)^k form a basis for the solution space by the Basis Theorem.
17. Letting a = .9 and b = 4/9 gives the difference equation Y_{k+2} − 1.3Y_{k+1} + .4Y_k = 1. First we find a particular solution Y_k = T of this equation, where T is a constant. The solution of the equation T − 1.3T + .4T = 1 is T = 10, so 10 is a particular solution to Y_{k+2} − 1.3Y_{k+1} + .4Y_k = 1. Next we solve the homogeneous difference equation Y_{k+2} − 1.3Y_{k+1} + .4Y_k = 0. The auxiliary equation for this difference equation is r² − 1.3r + .4 = 0. By the quadratic formula (or factoring), r = .8 or r = .5, so two solutions of the homogeneous difference equation are (.8)^k and (.5)^k. The signals (.8)^k and (.5)^k are linearly independent because neither is a multiple of the other. By Theorem 17, the solution space is two-dimensional, so the two linearly independent signals (.8)^k and (.5)^k form a basis for the solution space of the homogeneous difference equation by the Basis Theorem. Translating the solution space of the homogeneous difference equation by the particular solution 10 of the nonhomogeneous difference equation gives us the general solution of Y_{k+2} − 1.3Y_{k+1} + .4Y_k = 1:

  Y_k = c1(.8)^k + c2(.5)^k + 10.

As k increases the first two terms in the solution approach 0, so Y_k approaches 10.

18. Letting a = .9 and b = .5 gives the difference equation Y_{k+2} − 1.35Y_{k+1} + .45Y_k = 1. First we find a particular solution Y_k = T of this equation, where T is a constant. The solution of the equation T − 1.35T + .45T = 1 is T = 10, so 10 is a particular solution to Y_{k+2} − 1.35Y_{k+1} + .45Y_k = 1. Next we solve the homogeneous difference equation Y_{k+2} − 1.35Y_{k+1} + .45Y_k = 0. The auxiliary equation for this difference equation is r² − 1.35r + .45 = 0. By the quadratic formula (or factoring), r = .6 or r = .75, so two solutions of the homogeneous difference equation are (.6)^k and (.75)^k. The signals (.6)^k and (.75)^k are linearly independent because neither is a multiple of the other. By Theorem 17, the solution space is two-dimensional, so the two linearly independent signals (.6)^k and (.75)^k form a basis for the solution space of the homogeneous difference equation by the Basis Theorem. Translating the solution space of the homogeneous difference equation by the particular solution 10 of the nonhomogeneous difference equation gives us the general solution of Y_{k+2} − 1.35Y_{k+1} + .45Y_k = 1:

  Y_k = c1(.6)^k + c2(.75)^k + 10.
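Since both auxiliary roots in Exercise 17 have absolute value less than 1, every solution converges to the particular solution 10. A quick iteration sketch (with arbitrary starting values, not from the text):

```python
# Iterate Y_{k+2} = 1.3 Y_{k+1} - 0.4 Y_k + 1 from Exercise 17 and watch
# the solution settle at the particular solution T = 10.
y_prev, y_curr = 2.0, 3.0          # arbitrary starting values
for _ in range(200):
    y_prev, y_curr = y_curr, 1.3 * y_curr - 0.4 * y_prev + 1

print(abs(y_curr - 10) < 1e-6)     # True
```

The transient terms c1(.8)^k and c2(.5)^k decay geometrically, so 200 steps are far more than enough.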
19. The auxiliary equation for this difference equation is r² + 4r + 1 = 0. By the quadratic formula, r = −2 + √3 or r = −2 − √3, so two solutions of the difference equation are (−2 + √3)^k and (−2 − √3)^k. The signals (−2 + √3)^k and (−2 − √3)^k are linearly independent because neither is a multiple of the other. By Theorem 17, the solution space is two-dimensional, so the two linearly independent signals (−2 + √3)^k and (−2 − √3)^k form a basis for the solution space by the Basis Theorem. Thus a general solution to this difference equation is

  y_k = c1(−2 + √3)^k + c2(−2 − √3)^k.

20. Let a = −2 + √3 and b = −2 − √3. Using the solution from the previous exercise, we find that y_1 = c1·a + c2·b = 5000 and y_N = c1·a^N + c2·b^N = 0. This is a system of linear equations with variables c1 and c2 whose augmented matrix may be row reduced:

  [a   b   5000]   [1 0  5000b^N/(ab^N − a^N b)]
  [a^N b^N    0] ~ [0 1 −5000a^N/(ab^N − a^N b)]

so

  c1 = 5000b^N/(ab^N − a^N b),  c2 = −5000a^N/(ab^N − a^N b).

(Alternatively, Cramer's Rule may be applied to get the same solution.) Thus

  y_k = c1 a^k + c2 b^k = 5000(a^k b^N − a^N b^k)/(ab^N − a^N b).
21. The smoothed signal
k
z has the following values:
1
(9 5 7) / 3 7,z=++ =
2
(5 7 3) / 3 5,z=++ =
3
(7 3 2) / 3 4,z=++ =
4
(3 2 4)/3 3,z=++ =
5
(2 4 6)/ 3 4,z=++ =
6
(4 6 5)/3 5,z=++ =
7
(6 5 7) / 3 6,z=++ =
8
(5 7 6) / 3 6,z=++ =
9
(7 6 8) / 3 7,z=++ =
10
(6 8 10)/ 3 8,z=++ =
11
(8 10 9) / 3 9,z=+ + =
12
(10 9 5) / 3 8,z=++ =
13
(9 5 7) / 3 7.z=++ =
[Figure for Exercise 21: plot comparing the original data with the smoothed data.]

22. a. The smoothed signal z_k has the following values:

z_0 = .35y_2 + .5y_1 + .35y_0 = .35(0) + .5(.7) + .35(3) = 1.4,
z_1 = .35y_3 + .5y_2 + .35y_1 = .35(−.7) + .5(0) + .35(.7) = 0,
z_2 = .35y_4 + .5y_3 + .35y_2 = .35(−3) + .5(−.7) + .35(0) = −1.4,
z_3 = .35y_5 + .5y_4 + .35y_3 = .35(−.7) + .5(−3) + .35(−.7) = −2,
z_4 = .35y_6 + .5y_5 + .35y_4 = .35(0) + .5(−.7) + .35(−3) = −1.4,
z_5 = .35y_7 + .5y_6 + .35y_5 = .35(.7) + .5(0) + .35(−.7) = 0,
z_6 = .35y_8 + .5y_7 + .35y_6 = .35(3) + .5(.7) + .35(0) = 1.4,
z_7 = .35y_9 + .5y_8 + .35y_7 = .35(.7) + .5(3) + .35(.7) = 2,
z_8 = .35y_10 + .5y_9 + .35y_8 = .35(0) + .5(.7) + .35(3) = 1.4.
b. This signal is two times the signal output by the filter when the input (in Example 3) was
y = cos(πt/4). This is expected because the filter is linear. The output from the input
2cos(πt/4) + cos(3πt/4) should be two times the output from cos(πt/4) plus the output from
cos(3πt/4) (which is zero).
23. a. y_{k+1} = 1.01y_k − 450, y_0 = 10,000.
b. [M] MATLAB code to create the table:
pay = 450, y = 10000, m = 0, table = [0;y]
while y>450
  y = 1.01*y-pay
  m = m+1
  table = [table [m;y]]
end
m, y
Mathematica code to create the table:
pay = 450; y = 10000; m = 0; balancetable = {{0, y}};
While[y > 450, {y = 1.01*y - pay; m = m + 1,
  AppendTo[balancetable, {m, y}]}];
m
y
c. [M] At the start of month 26, the balance due is $114.88. At the end of this month the unpaid
balance will be (1.01)($114.88)=$116.03. The final payment will thus be $116.03. The total paid
by the borrower is (25)($450.00)+$116.03=$11,366.03.
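The same loop is easy to sketch outside MATLAB/Mathematica; a hedged Python version of part b, with variable names chosen to mirror the code above:

```python
pay, y, m = 450, 10000.0, 0
balances = [(0, y)]
while y > pay:           # stop once the balance is at most one payment
    y = 1.01 * y - pay   # accrue 1% interest, then subtract the payment
    m += 1
    balances.append((m, y))

final_payment = 1.01 * y           # last balance plus its month of interest
total_paid = m * pay + final_payment
```

Running this reproduces the figures in part c: 25 full payments, a final payment of about $116.03, and a total of about $11,366.03.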
24. a. y_{k+1} = 1.005y_k + 200, y_0 = 1,000.
b. [M] MATLAB code to create the table:
pay = 200, y = 1000, m = 0, table = [0;y]
for m = 1:60
  y = 1.005*y+pay
  table = [table [m;y]]
end
interest = y-60*pay-1000
Mathematica code to create the table:
pay = 200; y = 1000; amounttable = {{0, y}};
Do[{y = 1.005*y + pay;
  AppendTo[amounttable, {m, y}]}, {m, 1, 60}];
interest = y-60*pay-1000
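A Python sketch of the same accumulation loop (mirroring the variable names above) reproduces the totals quoted in part c:

```python
pay, y = 200, 1000.0
history = {0: y}
for m in range(1, 61):
    y = 1.005 * y + pay   # monthly interest of .5%, then the deposit
    history[m] = y

interest = y - 60 * pay - 1000   # earnings beyond deposits and principal
```

The balances at k = 24, 48, and 60 match the values reported in part c to the cent.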
c. [M] The total is $6213.55 at k = 24, $12,090.06 at k = 48, and $15,302.86 at k = 60. When k = 60,
the interest earned is $2302.86.
25. To show that y_k = k² is a solution of y_{k+2} + 3y_{k+1} − 4y_k = 10k + 7, substitute y_k = k²,
y_{k+1} = (k + 1)², and y_{k+2} = (k + 2)²:

y_{k+2} + 3y_{k+1} − 4y_k = (k + 2)² + 3(k + 1)² − 4k²
 = (k² + 4k + 4) + 3(k² + 2k + 1) − 4k²
 = k² + 4k + 4 + 3k² + 6k + 3 − 4k²
 = 10k + 7 for all k

The auxiliary equation for the homogeneous difference equation y_{k+2} + 3y_{k+1} − 4y_k = 0 is
r² + 3r − 4 = 0. By the quadratic formula (or factoring), r = −4 or r = 1, so two solutions of the
difference equation are (−4)^k and 1^k. The signals (−4)^k and 1^k are linearly independent because
neither is a multiple of the other. By Theorem 17, the solution space is two-dimensional, so the two
linearly independent signals (−4)^k and 1^k form a basis for the solution space of the homogeneous
difference equation by the Basis Theorem. The general solution to the homogeneous difference
equation is thus c_1(−4)^k + c_2 · 1^k = c_1(−4)^k + c_2. Adding the particular solution k² of the
nonhomogeneous difference equation, we find that the general solution of the difference equation
y_{k+2} + 3y_{k+1} − 4y_k = 10k + 7 is y_k = k² + c_1(−4)^k + c_2.
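The general solution just found can be spot-checked for arbitrary weights; a small sketch in which the weights c_1, c_2 are illustrative values:

```python
def y(k, c1=2.0, c2=-3.0):
    # general solution y_k = k^2 + c1*(-4)^k + c2 (c1, c2 arbitrary)
    return k ** 2 + c1 * (-4.0) ** k + c2

# residuals of the recurrence y_{k+2} + 3y_{k+1} - 4y_k = 10k + 7
residuals = [y(k + 2) + 3 * y(k + 1) - 4 * y(k) - (10 * k + 7)
             for k in range(10)]
```

Every residual vanishes, confirming that the homogeneous part drops out of the recurrence regardless of c_1 and c_2.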
26. To show that y_k = 1 + k is a solution of y_{k+2} − 6y_{k+1} + 5y_k = −4, substitute y_k = 1 + k,
y_{k+1} = 1 + (k + 1) = 2 + k, and y_{k+2} = 1 + (k + 2) = 3 + k:

y_{k+2} − 6y_{k+1} + 5y_k = (3 + k) − 6(2 + k) + 5(1 + k)
 = 3 + k − 12 − 6k + 5 + 5k
 = −4 for all k

The auxiliary equation for the homogeneous difference equation y_{k+2} − 6y_{k+1} + 5y_k = 0 is
r² − 6r + 5 = 0. By the quadratic formula (or factoring), r = 1 or r = 5, so two solutions of the
difference equation are 1^k and 5^k. The signals 1^k and 5^k are linearly independent because neither
is a multiple of the other. By Theorem 17, the solution space is two-dimensional, so the two linearly
independent signals 1^k and 5^k form a basis for the solution space of the homogeneous difference
equation by the Basis Theorem. The general solution to the homogeneous difference equation is thus
c_1 · 1^k + c_2 · 5^k. Adding the particular solution 1 + k of the nonhomogeneous difference equation, we
find that the general solution of the difference equation y_{k+2} − 6y_{k+1} + 5y_k = −4 is
y_k = 1 + k + c_1 · 1^k + c_2 · 5^k.
27. To show that y_k = k · 2^k is a solution of y_{k+2} − 4y_k = 8 · 2^k, substitute y_k = k · 2^k and
y_{k+2} = (k + 2)2^{k+2}:

y_{k+2} − 4y_k = (k + 2)2^{k+2} − 4k · 2^k = 4(k + 2)2^k − 4k · 2^k = 8 · 2^k for all k

The auxiliary equation for the homogeneous difference equation y_{k+2} − 4y_k = 0 is r² − 4 = 0. By the
quadratic formula (or factoring), r = 2 or r = −2, so two solutions of the difference equation are 2^k
and (−2)^k. The signals 2^k and (−2)^k are linearly independent because neither is a multiple of the
other. By Theorem 17, the solution space is two-dimensional, so the two linearly independent signals
2^k and (−2)^k form a basis for the solution space of the homogeneous difference equation by the
Basis Theorem. The general solution to the homogeneous difference equation is thus
c_1 · 2^k + c_2(−2)^k. Adding the particular solution k · 2^k of the nonhomogeneous difference equation,
we find that the general solution of the difference equation y_{k+2} − 4y_k = 8 · 2^k is
y_k = k · 2^k + c_1 · 2^k + c_2(−2)^k.
28. To show that y_k = 1 + 2k is a solution of y_{k+2} + 25y_k = 30 + 52k, substitute y_k = 1 + 2k and
y_{k+2} = 1 + 2(k + 2) = 5 + 2k:

y_{k+2} + 25y_k = 5 + 2k + 25(1 + 2k) = 5 + 2k + 25 + 50k = 30 + 52k for all k

The auxiliary equation for the homogeneous difference equation y_{k+2} + 25y_k = 0 is r² + 25 = 0. By
the quadratic formula (or factoring), r = 5i or r = −5i, so two solutions of the difference equation are
5^k cos(kπ/2) and 5^k sin(kπ/2). The signals 5^k cos(kπ/2) and 5^k sin(kπ/2) are linearly independent
because neither is a multiple of the other. By Theorem 17, the solution space is two-dimensional, so
the two linearly independent signals 5^k cos(kπ/2) and 5^k sin(kπ/2) form a basis for the solution
space of the homogeneous difference equation by the Basis Theorem. The general solution to the
homogeneous difference equation is thus c_1 · 5^k cos(kπ/2) + c_2 · 5^k sin(kπ/2). Adding the particular
solution 1 + 2k of the nonhomogeneous difference equation, we find that the general solution of the
difference equation y_{k+2} + 25y_k = 30 + 52k is

y_k = 1 + 2k + c_1 · 5^k cos(kπ/2) + c_2 · 5^k sin(kπ/2).
29. Let x_k = [y_k; y_{k+1}; y_{k+2}; y_{k+3}]. Then

x_{k+1} = [y_{k+1}; y_{k+2}; y_{k+3}; y_{k+4}] = [0 1 0 0; 0 0 1 0; 0 0 0 1; 2 −6 8 −3][y_k; y_{k+1}; y_{k+2}; y_{k+3}] = Ax_k.
30. Let x_k = [y_k; y_{k+1}; y_{k+2}]. Then

x_{k+1} = [y_{k+1}; y_{k+2}; y_{k+3}] = [0 1 0; 0 0 1; 8 0 5][y_k; y_{k+1}; y_{k+2}] = Ax_k.
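The pattern in Exercises 29 and 30 (an order-n equation becomes a first-order system x_{k+1} = Ax_k with a companion matrix A) can be sketched generically; the recurrence coefficients below are illustrative, not the ones from the exercises:

```python
def companion(coeffs):
    # coeffs c0..c_{n-1} encode y_{k+n} = c0*y_k + c1*y_{k+1} + ... + c_{n-1}*y_{k+n-1}
    n = len(coeffs)
    A = [[0.0] * n for _ in range(n)]
    for i in range(n - 1):
        A[i][i + 1] = 1.0            # shift rows: entry i of x_{k+1} is y_{k+i+1}
    A[n - 1] = [float(c) for c in coeffs]
    return A

def step(A, x):
    # one matrix-vector product x_{k+1} = A x_k
    return [sum(a * v for a, v in zip(row, x)) for row in A]

# illustrative third-order recurrence y_{k+3} = y_k + 2*y_{k+2}
A = companion([1.0, 0.0, 2.0])
x = [1.0, 1.0, 1.0]                  # (y_0, y_1, y_2)
x = step(A, x)                       # now (y_1, y_2, y_3) with y_3 = 1 + 2*1 = 3
```

The shift rows of the companion matrix copy the window forward, and only the last row actually applies the recurrence.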
31. The difference equation is of order 2. Since the equation y_{k+3} + 5y_{k+2} + 6y_{k+1} = 0 holds for all k,
it holds if k is replaced by k − 1. Performing this replacement transforms the equation into
y_{k+2} + 5y_{k+1} + 6y_k = 0, which is also true for all k. The transformed equation has order 2.
32. The order of the difference equation depends on the values of a_1, a_2, and a_3. If a_3 ≠ 0, then the
order is 3. If a_3 = 0 and a_2 ≠ 0, then the order is 2. If a_3 = a_2 = 0 and a_1 ≠ 0, then the order is 1.
If a_3 = a_2 = a_1 = 0, then the order is 0, and the equation has only the zero signal for a solution.
33. The Casorati matrix C(k) is

C(k) = [y_k z_k; y_{k+1} z_{k+1}] = [k² 2k|k|; (k+1)² 2(k+1)|k+1|]

In particular,

C(0) = [0 0; 1 2], C(−1) = [1 −2; 0 0], and C(−2) = [4 −8; 1 −2]

none of which are invertible. In fact, C(k) is not invertible for all k, since

det C(k) = 2k²(k+1)|k+1| − 2(k+1)²k|k| = 2k(k+1)(k|k+1| − (k+1)|k|)

If k = 0 or k = −1, det C(k) = 0. If k > 0, then k + 1 > 0 and k|k+1| − (k+1)|k| = k(k+1) − (k+1)k
= 0, so det C(k) = 0. If k < −1, then k + 1 < 0 and k|k+1| − (k+1)|k| = −k(k+1) + (k+1)k = 0, so
det C(k) = 0. Thus det C(k) = 0 for all k, and C(k) is not invertible for all k. Since C(k) is not invertible
for all k, it provides no information about whether the signals {y_k} and {z_k} are linearly dependent
or linearly independent. In fact, neither signal is a multiple of the other, so the signals {y_k} and {z_k}
are linearly independent.
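A numeric sketch of the same computation, using the signals y_k = k² and z_k = 2k|k| from this exercise:

```python
def yk(k):
    return k * k

def zk(k):
    return 2 * k * abs(k)

def casorati_det(k):
    # determinant of C(k) = [[y_k, z_k], [y_{k+1}, z_{k+1}]]
    return yk(k) * zk(k + 1) - zk(k) * yk(k + 1)

dets = [casorati_det(k) for k in range(-6, 7)]

# the signals are nonetheless independent: the ratio z_k / y_k changes sign
ratios = {zk(k) / yk(k) for k in (-2, 2)}
```

Every determinant is zero, yet the two distinct ratios show z is not a constant multiple of y, illustrating why a singular Casorati matrix is inconclusive.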
34. No, the signals could be linearly dependent, since the vector space V of functions considered on the
entire real line is not the vector space 𝕊 of signals. For example, consider the functions f(t) = sin πt,
g(t) = sin 2πt, and h(t) = sin 3πt. The functions f, g, and h are linearly independent in V since they
have different periods and thus no function could be a linear combination of the other two. However,
sampling the functions at any integer n gives f(n) = g(n) = h(n) = 0, so the signals are linearly
dependent in 𝕊.
35. Let {y_k} and {z_k} be in 𝕊, and let r be any scalar. The kth term of {y_k} + {z_k} is y_k + z_k, while
the kth term of r{y_k} is ry_k. Thus

T({y_k} + {z_k}) = T{y_k + z_k}
 = (y_{k+2} + z_{k+2}) + a(y_{k+1} + z_{k+1}) + b(y_k + z_k)
 = (y_{k+2} + ay_{k+1} + by_k) + (z_{k+2} + az_{k+1} + bz_k)
 = T{y_k} + T{z_k}, and

T(r{y_k}) = T{ry_k}
 = ry_{k+2} + a(ry_{k+1}) + b(ry_k)
 = r(y_{k+2} + ay_{k+1} + by_k)
 = rT{y_k}

so T has the two properties that define a linear transformation.
36. Let z be in V, and suppose that x_p in V satisfies T(x_p) = z. Let u be in the kernel of T; then T(u) =
0. Since T is a linear transformation, T(u + x_p) = T(u) + T(x_p) = 0 + z = z, so the vector
x = u + x_p satisfies the nonhomogeneous equation T(x) = z.
37. We compute that

(TD)(y_0, y_1, y_2, …) = T(D(y_0, y_1, y_2, …)) = T(0, y_0, y_1, y_2, …) = (y_0, y_1, y_2, …)

while

(DT)(y_0, y_1, y_2, …) = D(T(y_0, y_1, y_2, …)) = D(y_1, y_2, y_3, …) = (0, y_1, y_2, y_3, …)

Thus TD = I (the identity transformation on 𝕊), while DT ≠ I.
4.9 SOLUTIONS

Notes: This section builds on the population movement example in Section 1.10. The migration matrix is
examined again in Section 5.2, where an eigenvector decomposition shows explicitly why the sequence of
state vectors x_k tends to a steady state vector. The discussion in Section 5.2 does not depend on prior
knowledge of this section.
1. a. Let N stand for "News" and M stand for "Music." Then the listeners' behavior is given by the
table

From:
N M To:
.7 .6 N
.3 .4 M

so the stochastic matrix is P = [.7 .6; .3 .4].

b. Since 100% of the listeners are listening to news at 8:15, the initial state vector is x_0 = [1; 0].

c. There are two breaks between 8:15 and 9:25, so we calculate x_2:

x_1 = Px_0 = [.7 .6; .3 .4][1; 0] = [.7; .3]
x_2 = Px_1 = [.7 .6; .3 .4][.7; .3] = [.67; .33]

Thus 33% of the listeners are listening to music at 9:25.
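The two-step calculation can be sketched in a few lines of plain Python (no libraries assumed):

```python
def matvec(P, x):
    # multiply a matrix (stored by rows) by a state vector
    return [sum(p * v for p, v in zip(row, x)) for row in P]

# stochastic matrix from part a
P = [[0.7, 0.6],
     [0.3, 0.4]]

x0 = [1.0, 0.0]          # everyone listening to news at 8:15
x1 = matvec(P, x0)       # state after the first break
x2 = matvec(P, x1)       # state after the second break
```

After two breaks the music share x2[1] is .33, matching the hand computation.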
2. a. Let the foods be labelled "1," "2," and "3." Then the animals' behavior is given by the table

From:
1 2 3 To:
.6 .2 .2 1
.2 .6 .2 2
.2 .2 .6 3

so the stochastic matrix is P = [.6 .2 .2; .2 .6 .2; .2 .2 .6].

b. There are two trials after the initial trial, so we calculate x_2. The initial state vector is
x_0 = [1; 0; 0].

x_1 = Px_0 = [.6 .2 .2; .2 .6 .2; .2 .2 .6][1; 0; 0] = [.6; .2; .2]
x_2 = Px_1 = [.6 .2 .2; .2 .6 .2; .2 .2 .6][.6; .2; .2] = [.44; .28; .28]

Thus the probability that the animal will choose food #2 is .28.
3. a. Let H stand for "Healthy" and I stand for "Ill." Then the students' conditions are given by the
table

From:
H I To:
.95 .45 H
.05 .55 I

so the stochastic matrix is P = [.95 .45; .05 .55].

b. Since 20% of the students are ill on Monday, the initial state vector is x_0 = [.8; .2]. For Tuesday's
percentages, we calculate x_1; for Wednesday's percentages, we calculate x_2:

x_1 = Px_0 = [.95 .45; .05 .55][.8; .2] = [.85; .15]
x_2 = Px_1 = [.95 .45; .05 .55][.85; .15] = [.875; .125]

Thus 15% of the students are ill on Tuesday, and 12.5% are ill on Wednesday.

c. Since the student is well today, the initial state vector is x_0 = [1; 0]. We calculate x_2:

x_1 = Px_0 = [.95 .45; .05 .55][1; 0] = [.95; .05]
x_2 = Px_1 = [.95 .45; .05 .55][.95; .05] = [.925; .075]

Thus the probability that the student is well two days from now is .925.
4. a. Let G stand for good weather, I for indifferent weather, and B for bad weather. Then the change
in the weather is given by the table

From:
G I B To:
.4 .5 .3 G
.3 .2 .4 I
.3 .3 .3 B

so the stochastic matrix is P = [.4 .5 .3; .3 .2 .4; .3 .3 .3].

b. The initial state vector is x_0 = [.5; .5; 0]. We calculate x_1:

x_1 = Px_0 = [.4 .5 .3; .3 .2 .4; .3 .3 .3][.5; .5; 0] = [.45; .25; .30]

Thus the chance of bad weather tomorrow is 30%.

c. The initial state vector is x_0 = [0; .6; .4]. We calculate x_2:

x_1 = Px_0 = [.4 .5 .3; .3 .2 .4; .3 .3 .3][0; .6; .4] = [.42; .28; .30]
x_2 = Px_1 = [.4 .5 .3; .3 .2 .4; .3 .3 .3][.42; .28; .30] = [.398; .302; .300]

Thus the chance of good weather on Wednesday is 39.8%, or approximately 40%.
5. We solve Px = x by rewriting the equation as (P − I)x = 0, where P − I = [−.9 .5; .9 −.5]. Row reducing
the augmented matrix for the homogeneous system (P − I)x = 0 gives

[−.9 .5 0; .9 −.5 0] ~ [1 −5/9 0; 0 0 0]

Thus x = [x_1; x_2] = x_2[5/9; 1], and one solution is [5; 9]. Since the entries in [5; 9] sum to 14, multiply by
1/14 to obtain the steady-state vector q = [5/14; 9/14].
6. We solve Px = x by rewriting the equation as (P − I)x = 0, where P − I = [−.6 .8; .6 −.8]. Row reducing
the augmented matrix for the homogeneous system (P − I)x = 0 gives

[−.6 .8 0; .6 −.8 0] ~ [1 −4/3 0; 0 0 0]

Thus x = [x_1; x_2] = x_2[4/3; 1], and one solution is [4; 3]. Since the entries in [4; 3] sum to 7, multiply by
1/7 to obtain the steady-state vector q = [4/7; 3/7] ≈ [.571; .429].
7. We solve Px = x by rewriting the equation as (P − I)x = 0, where P − I = [−.3 .1 .1; .2 −.2 .2; .1 .1 −.3].
Row reducing the augmented matrix for the homogeneous system (P − I)x = 0 gives

[−.3 .1 .1 0; .2 −.2 .2 0; .1 .1 −.3 0] ~ [1 0 −1 0; 0 1 −2 0; 0 0 0 0]

Thus x = x_3[1; 2; 1], and one solution is [1; 2; 1]. Since the entries in [1; 2; 1] sum to 4, multiply by 1/4
to obtain the steady-state vector q = [1/4; 1/2; 1/4] = [.25; .5; .25].
8. We solve Px = x by rewriting the equation as (P − I)x = 0, where P − I = [−.6 .5 .8; 0 −.5 .1; .6 0 −.9].
Row reducing the augmented matrix for the homogeneous system (P − I)x = 0 gives

[−.6 .5 .8 0; 0 −.5 .1 0; .6 0 −.9 0] ~ [1 0 −3/2 0; 0 1 −1/5 0; 0 0 0 0]

Thus x = x_3[3/2; 1/5; 1], and one solution is [15; 2; 10]. Since the entries in [15; 2; 10] sum to 27, multiply
by 1/27 to obtain the steady-state vector q = [15/27; 2/27; 10/27] ≈ [.556; .074; .370].
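The steady state can be double-checked by iterating the chain; a sketch using the matrix P recovered from the P − I shown above (P = (P − I) + I = [.4 .5 .8; 0 .5 .1; .6 0 .1]):

```python
def matvec(P, x):
    return [sum(p * v for p, v in zip(row, x)) for row in P]

# P reconstructed by adding I back to the matrix P - I above
P = [[0.4, 0.5, 0.8],
     [0.0, 0.5, 0.1],
     [0.6, 0.0, 0.1]]

x = [1.0, 0.0, 0.0]
for _ in range(200):        # P is regular, so P^k x converges to q
    x = matvec(P, x)

q_exact = [15 / 27, 2 / 27, 10 / 27]
```

The iterates converge to the steady-state vector q = [15/27; 2/27; 10/27] found above, and the entries of x continue to sum to 1 at every step.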
9. Since P² = [.84 .2; .16 .8] has all positive entries, P is a regular stochastic matrix.

10. Since P^k = [1 1 − .7^k; 0 .7^k] will have a zero as its (2,1) entry for all k, P is not a regular stochastic
matrix.
11. a. From Exercise 1, P = [.7 .6; .3 .4], so P − I = [−.3 .6; .3 −.6]. Solving (P − I)x = 0 by row reducing the
augmented matrix gives

[−.3 .6 0; .3 −.6 0] ~ [1 −2 0; 0 0 0]

Thus x = [x_1; x_2] = x_2[2; 1], and one solution is [2; 1]. Since the entries in [2; 1] sum to 3, multiply by 1/3
to obtain the steady-state vector q = [2/3; 1/3] ≈ [.667; .333].

b. Since q = [2/3; 1/3], 2/3 of the listeners will be listening to the news at some time late in the day.
12. From Exercise 2, P = [.6 .2 .2; .2 .6 .2; .2 .2 .6], so P − I = [−.4 .2 .2; .2 −.4 .2; .2 .2 −.4]. Solving
(P − I)x = 0 by row reducing the augmented matrix gives

[−.4 .2 .2 0; .2 −.4 .2 0; .2 .2 −.4 0] ~ [1 0 −1 0; 0 1 −1 0; 0 0 0 0]

Thus x = x_3[1; 1; 1], and one solution is [1; 1; 1]. Since the entries in [1; 1; 1] sum to 3, multiply by 1/3 to
obtain the steady-state vector q = [1/3; 1/3; 1/3] ≈ [.333; .333; .333]. Thus in the long run each food will be
preferred equally.
13. a. From Exercise 3, P = [.95 .45; .05 .55], so P − I = [−.05 .45; .05 −.45]. Solving (P − I)x = 0 by row
reducing the augmented matrix gives

[−.05 .45 0; .05 −.45 0] ~ [1 −9 0; 0 0 0]

Thus x = [x_1; x_2] = x_2[9; 1], and one solution is [9; 1]. Since the entries in [9; 1] sum to 10, multiply by
1/10 to obtain the steady-state vector q = [9/10; 1/10] = [.9; .1].

b. After many days, a specific student is ill with probability .1, and it does not matter whether that
student is ill today or not.
14. From Exercise 4, P = [.4 .5 .3; .3 .2 .4; .3 .3 .3], so P − I = [−.6 .5 .3; .3 −.8 .4; .3 .3 −.7]. Solving
(P − I)x = 0 by row reducing the augmented matrix gives

[−.6 .5 .3 0; .3 −.8 .4 0; .3 .3 −.7 0] ~ [1 0 −4/3 0; 0 1 −1 0; 0 0 0 0]

Thus x = x_3[4/3; 1; 1], and one solution is [4; 3; 3]. Since the entries in [4; 3; 3] sum to 10, multiply by
1/10 to obtain the steady-state vector q = [4/10; 3/10; 3/10] = [.4; .3; .3]. Thus in the long run the chance that a day
has good weather is 40%.
15. [M] Let P = [.9821 .0029; .0179 .9971], so P − I = [−.0179 .0029; .0179 −.0029]. Solving (P − I)x = 0 by row reducing
the augmented matrix gives

[−.0179 .0029 0; .0179 −.0029 0] ~ [1 −.162011 0; 0 0 0]

Thus x = [x_1; x_2] = x_2[.162011; 1], and one solution is [.162011; 1]. Since the entries in [.162011; 1] sum to
1.162011, multiply by 1/1.162011 to obtain the steady-state vector q = [.139423; .860577]. Thus about
13.9% of the total U.S. population would eventually live in California.
16. [M] Let P = [.90 .01 .09; .01 .90 .01; .09 .09 .90], so P − I = [−.10 .01 .09; .01 −.10 .01; .09 .09 −.10].
Solving (P − I)x = 0 by row reducing the augmented matrix gives

[−.10 .01 .09 0; .01 −.10 .01 0; .09 .09 −.10 0] ~ [1 0 −.919192 0; 0 1 −.191919 0; 0 0 0 0]

Thus x = x_3[.919192; .191919; 1], and one solution is [.919192; .191919; 1]. Since the entries in
[.919192; .191919; 1] sum to 2.111111, multiply by 1/2.111111 to obtain the steady-state vector
q = [.435407; .090909; .473684]. Thus on a typical day, about (.090909)(2000) = 182 cars will be rented
or available from the downtown location.
17. a. The entries in each column of P sum to 1. Each column in the matrix P − I has the same entries as
in P except one of the entries is decreased by 1. Thus the entries in each column of P − I sum to 0,
and adding all of the other rows of P − I to its bottom row produces a row of zeros.
b. By part a., the bottom row of P − I is the negative of the sum of the other rows, so the rows of
P − I are linearly dependent.
c. By part b. and the Spanning Set Theorem, the bottom row of P − I can be removed and the
remaining (n − 1) rows will still span the row space of P − I. Thus the dimension of the row space
of P − I is less than n. Alternatively, let A be the matrix obtained from P − I by adding to the
bottom row all the other rows. These row operations did not change the row space, so the row
space of P − I is spanned by the nonzero rows of A. By part a., the bottom row of A is a zero row,
so the row space of P − I is spanned by the first (n − 1) rows of A.
d. By part c., the rank of P − I is less than n, so the Rank Theorem may be used to show that
dimNul(P − I) = n − rank(P − I) > 0. Alternatively the Invertible Matrix Theorem may be used
since P − I is a square matrix.
18. If α = β = 0 then P = [1 0; 0 1]. Notice that Px = x for any vector x in ℝ², and that [1; 0] and [0; 1] are
two linearly independent steady-state vectors in this case.

If α ≠ 0 or β ≠ 0, we solve (P − I)x = 0 where P − I = [−α β; α −β]. Row reducing the augmented
matrix gives

[−α β 0; α −β 0] ~ [−α β 0; 0 0 0]

So αx_1 = βx_2, and one possible solution is to let x_1 = β, x_2 = α. Thus x = [x_1; x_2] = [β; α]. Since the
entries in [β; α] sum to α + β, multiply by 1/(α + β) to obtain the steady-state vector

q = 1/(α + β) [β; α].
19. a. The product Sx equals the sum of the entries in x. Thus x is a probability vector if and only if its
entries are nonnegative and Sx = 1.
b. Let P = [p_1 p_2 ⋯ p_n], where p_1, p_2, …, p_n are probability vectors. By part a.,

SP = [Sp_1 Sp_2 ⋯ Sp_n] = [1 1 ⋯ 1] = S

c. By part b., S(Px) = (SP)x = Sx = 1. The entries in Px are nonnegative since P and x have only
nonnegative entries. By part a., the condition S(Px) = 1 shows that Px is a probability vector.
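Part b is easy to sketch concretely with S as a row of ones; the 3 × 3 stochastic matrix below is an illustrative choice, not one from the exercises:

```python
def row_times_matrix(s, P):
    # (SP)_j = sum_i s_i * P[i][j]
    n = len(P[0])
    return [sum(s[i] * P[i][j] for i in range(len(P))) for j in range(n)]

S = [1.0, 1.0, 1.0]
P = [[0.6, 0.2, 0.2],      # illustrative stochastic matrix: columns sum to 1
     [0.2, 0.6, 0.2],
     [0.2, 0.2, 0.6]]

SP = row_times_matrix(S, P)
```

Each entry of SP is a column sum of P, so SP is again the row of ones, i.e. SP = S.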
20. Let P = [p_1 p_2 ⋯ p_n], so P² = PP = [Pp_1 Pp_2 ⋯ Pp_n]. By Exercise 19c., the columns
of P² are probability vectors, so P² is a stochastic matrix.

Alternatively, SP = S by Exercise 19b., since P is a stochastic matrix. Right multiplication by P gives
SP² = SP, so SP = S implies that SP² = S. Since the entries in P are nonnegative, so are the entries
in P², and P² is a stochastic matrix.
21. [M]
a. To four decimal places,

P² = [.2779 .2780 .2803 .2941; .3368 .3355 .3357 .3335; .1847 .1861 .1833 .1697; .2005 .2004 .2007 .2027],
P³ = [.2817 .2817 .2817 .2814; .3356 .3356 .3355 .3352; .1817 .1817 .1819 .1825; .2010 .2010 .2010 .2009],
P⁴ = P⁵ = [.2816 .2816 .2816 .2816; .3355 .3355 .3355 .3355; .1819 .1819 .1819 .1819; .2009 .2009 .2009 .2009]

The columns of P^k are converging to a common vector as k increases. The steady state vector q
for P is q = [.2816; .3355; .1819; .2009], which is the vector to which the columns of P^k are converging.
b. To four decimal places,

Q^10 = [.8222 .4044 .5385; .0324 .3966 .1666; .1453 .1990 .2949], Q^20 = [.7674 .6000 .6690; .0637 .2036 .1326; .1688 .1964 .1984],
Q^30 = [.7477 .6815 .7105; .0783 .1329 .1074; .1740 .1856 .1821], Q^40 = [.7401 .7140 .7257; .0843 .1057 .0960; .1756 .1802 .1783],
Q^50 = [.7372 .7269 .7315; .0867 .0951 .0913; .1761 .1780 .1772], Q^60 = [.7360 .7320 .7338; .0876 .0909 .0894; .1763 .1771 .1767],
Q^70 = [.7356 .7340 .7347; .0880 .0893 .0887; .1764 .1767 .1766], Q^80 = [.7354 .7348 .7351; .0881 .0887 .0884; .1764 .1766 .1765],
Q^116 = Q^117 = [.7353 .7353 .7353; .0882 .0882 .0882; .1765 .1765 .1765]

The steady state vector q for Q is q = [.7353; .0882; .1765]. Conjecture: the columns of P^k, where P is a
regular stochastic matrix, converge to the steady state vector for P as k increases.
c. Let P be an n × n regular stochastic matrix, q the steady state vector of P, and e_j the jth column
of the n × n identity matrix. Consider the Markov chain {x_k} where x_{k+1} = Px_k and x_0 = e_j. By
Theorem 18, x_k = P^k x_0 converges to q as k → ∞. But P^k x_0 = P^k e_j, which is the jth column of
P^k. Thus the jth column of P^k converges to q as k → ∞; that is, P^k → [q q ⋯ q].
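The convergence of P^k to [q q ⋯ q] is easy to observe numerically; a sketch with the 2 × 2 matrix from Exercise 1, whose steady-state vector is (2/3, 1/3):

```python
def matmul(A, B):
    # matrix product for small lists-of-rows matrices
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

P = [[0.7, 0.6],
     [0.3, 0.4]]

Pk = [[1.0, 0.0], [0.0, 1.0]]   # start from the identity
for _ in range(60):
    Pk = matmul(Pk, P)          # Pk is now P^60

q = [2 / 3, 1 / 3]              # steady-state vector of P (Exercise 11)
```

After 60 powers both columns of Pk agree with q to well beyond displayed precision, since the second eigenvalue of P is only 0.1.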
22. [M] Answers will vary.
MATLAB Student Version 4.0 code for Method (1):
A=randstoc(32); flops(0);
tic, x=nulbasis(A-eye(32));
q=x/sum(x); toc, flops
MATLAB Student Version 4.0 code for Method (2):
A=randstoc(32); flops(0);
tic, B=A^100; q=B(:,1); toc, flops
Chapter 4 SUPPLEMENTARY EXERCISES
1. a. True. This set is Span{v_1, …, v_p}, and every subspace is itself a vector space.
b. True. Any linear combination of v_1, …, v_{p−1} is also a linear combination of v_1, …, v_{p−1}, v_p
using the zero weight on v_p.
c. False. Counterexample: Take v_p = 2v_1. Then {v_1, …, v_p} is linearly dependent.
d. False. Counterexample: Let {e_1, e_2, e_3} be the standard basis for ℝ³. Then {e_1, e_2} is a linearly
independent set but is not a basis for ℝ³.
e. True. See the Spanning Set Theorem (Section 4.3).
f. True. By the Basis Theorem, S is a basis for V because S spans V and has exactly p elements. So
S must be linearly independent.
g. False. The plane must pass through the origin to be a subspace.
h. False. Counterexample: [2 5 2 0; 0 0 7 3; 0 0 0 0].
i. True. This statement appears before Theorem 13 in Section 4.6.
j. False. Row operations on A do not change the solutions of Ax = 0.
k. False. Counterexample: A = [1 2; 3 6]; A has two nonzero rows but the rank of A is 1.
l. False. If U has k nonzero rows, then rank A = k and dimNul A = n − k by the Rank Theorem.
m. True. Row equivalent matrices have the same number of pivot columns.
n. False. The nonzero rows of A span Row A but they may not be linearly independent.
o. True. The nonzero rows of the reduced echelon form E form a basis for the row space of each
matrix that is row equivalent to E.
p. True. If H is the zero subspace, let A be the 3 × 3 zero matrix. If dim H = 1, let {v} be a basis
for H and set A = [v v v]. If dim H = 2, let {u, v} be a basis for H and set A = [u v v],
for example. If dim H = 3, then H = ℝ³, so A can be any 3 × 3 invertible matrix. Or, let {u, v,
w} be a basis for H and set A = [u v w].
q. False. Counterexample: A = [1 0 0; 0 1 0]. If rank A = n (the number of columns in A), then the
transformation x ↦ Ax is one-to-one.
r. True. If x ↦ Ax is onto, then Col A = ℝᵐ and rank A = m. See Theorem 12(a) in Section 1.9.
s. True. See the second paragraph after Theorem 15 in Section 4.7.
t. False. The jth column of P_{C←B} is [b_j]_C.
2. The set is Span S, where S = {[1; −2; −1; −3], [2; 5; 4; 1], [5; 8; 7; −1]}. Note that S is a linearly dependent set, but each
pair of vectors in S forms a linearly independent set. Thus any two of the three vectors
[1; −2; −1; −3], [2; 5; 4; 1], [5; 8; 7; −1] will be a basis for Span S.
3. The vector b will be in W = Span{u_1, u_2} if and only if there exist constants c_1 and c_2 with
c_1u_1 + c_2u_2 = b. Row reducing the augmented matrix gives

[2 1 b_1; −4 2 b_2; 6 −5 b_3] ~ [2 1 b_1; 0 4 b_2 + 2b_1; 0 0 b_1 + 2b_2 + b_3]

so W = Span{u_1, u_2} is the set of all (b_1, b_2, b_3) satisfying b_1 + 2b_2 + b_3 = 0.
4. The vector g is not a scalar multiple of the vector f, and f is not a scalar multiple of g, so the set
{f, g} is linearly independent. Even though the number g(t) is a scalar multiple of f(t) for each t, the
scalar depends on t.
5. The vector p_1 is not zero, and p_2 is not a multiple of p_1. However, p_3 is 2p_1 + 2p_2, so p_3 is
discarded. The vector p_4 cannot be a linear combination of p_1 and p_2 since p_4 involves t² but p_1
and p_2 do not involve t². The vector p_5 is (3/2)p_1 + (1/2)p_2 + p_4 (which may not be so easy to see
at first). Thus p_5 is a linear combination of p_1, p_2, and p_4, so p_5 is discarded. So the resulting
basis is {p_1, p_2, p_4}.
6. Find two polynomials from the set {p_1, …, p_4} that are not multiples of one another. This is easy,
because one compares only two polynomials at a time. Since these two polynomials form a linearly
independent set in a two-dimensional space, they form a basis for H by the Basis Theorem.
7. You would have to know that the solution set of the homogeneous system is spanned by two
solutions. In this case, the null space of the 18 × 20 coefficient matrix A is at most two-dimensional.
By the Rank Theorem, dimCol A = 20 − dimNul A ≥ 20 − 2 = 18. Since Col A is a subspace of ℝ¹⁸,
Col A = ℝ¹⁸. Thus Ax = b has a solution for every b in ℝ¹⁸.
8. If n = 0, then H and V are both the zero subspace, and H = V. If n > 0, then a basis for H consists of n
linearly independent vectors u_1, …, u_n. These vectors are also linearly independent as elements of V.
But since dimV = n, any set of n linearly independent vectors in V must be a basis for V by the Basis
Theorem. So u_1, …, u_n span V, and H = Span{u_1, …, u_n} = V.
9. Let T: ℝⁿ → ℝᵐ be a linear transformation, and let A be the m × n standard matrix of T.
a. If T is one-to-one, then the columns of A are linearly independent by Theorem 12 in Section 1.9,
so dimNul A = 0. By the Rank Theorem, dimCol A = n − 0 = n, which is the number of columns
of A. As noted in Section 4.2, the range of T is Col A, so the dimension of the range of T is n.
b. If T maps ℝⁿ onto ℝᵐ, then the columns of A span ℝᵐ by Theorem 12 in Section 1.9, so dimCol A
= m. By the Rank Theorem, dimNul A = n − m. As noted in Section 4.2, the kernel of T is Nul A,
so the dimension of the kernel of T is n − m. Note that n − m must be nonnegative in this case:
since A must have a pivot in each row, n ≥ m.
10. Let S = {v_1, …, v_p}. If S were linearly independent and not a basis for V, then S would not span V.
In this case, there would be a vector v_{p+1} in V that is not in Span{v_1, …, v_p}. Let
S′ = {v_1, …, v_p, v_{p+1}}. Then S′ is linearly independent since none of the vectors in S′ is a linear
combination of vectors that precede it. Since S′ has more elements than S, this would contradict the
maximality of S. Hence S must be a basis for V.

11. If S is a finite spanning set for V, then a subset S′ of S is a basis for V. Since S′ is a basis for V, S′
must span V. Since S is a minimal spanning set, S′ cannot be a proper subset of S. Thus S′ = S, and
S is a basis for V.
12. a. Let y be in Col AB. Then y = ABx for some x. But ABx = A(Bx), so y = A(Bx), and y is in Col A.
Thus Col AB is a subspace of Col A, so rank AB = dimCol AB ≤ dimCol A = rank A by Theorem
11 in Section 4.5.
b. By the Rank Theorem and part a.:

rank AB = rank(AB)ᵀ = rank BᵀAᵀ ≤ rank Bᵀ = rank B
13. By Exercise 12, rank PA ≤ rank A, and rank A = rank(P⁻¹PA) ≤ rank PA, so
rank PA = rank A.
14. Note that (AQ)ᵀ = QᵀAᵀ. Since Qᵀ is invertible, we can use Exercise 13 to conclude that
rank(AQ)ᵀ = rank QᵀAᵀ = rank Aᵀ. Since the ranks of a matrix and its transpose are equal (by the
Rank Theorem), rank AQ = rank A.
15. The equation AB = O shows that each column of B is in Nul A. Since Nul A is a subspace of ℝⁿ, all
linear combinations of the columns of B are in Nul A. That is, Col B is a subspace of Nul A. By
Theorem 11 in Section 4.5, rank B = dimCol B ≤ dimNul A. By this inequality and the Rank
Theorem applied to A,

n = rank A + dimNul A ≥ rank A + rank B
16. Suppose that rank A = r1 and rank B = r2. Then there are rank factorizations A = C1R1 and B = C2R2 of A and B, where C1 is m × r1 with rank r1, C2 is m × r2 with rank r2, R1 is r1 × n with rank r1, and R2 is r2 × n with rank r2. Create an m × (r1 + r2) matrix C = [C1 C2] and an (r1 + r2) × n matrix R by stacking R1 over R2. Then
A + B = C1R1 + C2R2 = [C1 C2][R1; R2] = CR
Since the matrix CR is a product, its rank cannot exceed the rank of either of its factors by Exercise 12. Since C has r1 + r2 columns, the rank of C cannot exceed r1 + r2. Likewise R has r1 + r2 rows, so the rank of R cannot exceed r1 + r2. Thus the rank of A + B cannot exceed r1 + r2 = rank A + rank B, or rank(A + B) ≤ rank A + rank B.
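The rank inequalities in Exercises 12 and 16 are easy to spot-check numerically. A minimal sketch with hypothetical matrices (any matrices of compatible sizes would do):

```python
import numpy as np

rank = np.linalg.matrix_rank

# Exercise 12a: rank(AB) <= min(rank A, rank B)
A = np.array([[1., 2., 3.],
              [2., 4., 6.]])          # rank 1
B = np.array([[1., 0., 0.],
              [0., 1., 0.],
              [0., 0., 0.]])          # rank 2
assert rank(A @ B) <= min(rank(A), rank(B))

# Exercise 16: rank(A + B) <= rank A + rank B (same-shaped matrices)
C = np.array([[1., 1.], [1., 1.]])    # rank 1
D = np.array([[1., 0.], [0., 0.]])    # rank 1
assert rank(C + D) <= rank(C) + rank(D)
```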
17. Let A be an m × n matrix with rank r.
(a) Let A1 consist of the r pivot columns of A. The columns of A1 are linearly independent, so A1 is an m × r matrix with rank r.
(b) By the Rank Theorem applied to A1, the dimension of Row A1 is r, so A1 has r linearly independent rows. Let A2 consist of the r linearly independent rows of A1. Then A2 is an r × r matrix with linearly independent rows. By the Invertible Matrix Theorem, A2 is invertible.
18. Let A be a 4 × 4 matrix and B be a 4 × 2 matrix, and let u0, ..., u3 be a sequence of input vectors in ℝ².
a. Use the equation x_{k+1} = A x_k + B u_k for k = 0, ..., 3, with x0 = 0.
x1 = A x0 + B u0 = B u0
x2 = A x1 + B u1 = AB u0 + B u1
x3 = A x2 + B u2 = A(AB u0 + B u1) + B u2 = A²B u0 + AB u1 + B u2
x4 = A x3 + B u3 = A(A²B u0 + AB u1 + B u2) + B u3
   = A³B u0 + A²B u1 + AB u2 + B u3
   = [B AB A²B A³B][u3; u2; u1; u0] = M u
Note that M has 4 rows because B does, and that M has 8 columns because B and each of the matrices AᵏB have 2 columns. The vector u in the final equation is in ℝ⁸, because each u_k is in ℝ².
b. If (A, B) is controllable, then the controllability matrix has rank 4, with a pivot in each row, and the columns of M span ℝ⁴. Therefore, for any vector v in ℝ⁴, there is a vector u in ℝ⁸ such that v = Mu. However, from part a. we know that x4 = Mu when u is partitioned into a control sequence u0, ..., u3. This particular control sequence makes x4 = v.
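The construction in part (a) can be sketched directly: stack the blocks AᵏB into the controllability matrix M and check its rank. The matrices A and B below are hypothetical placeholders, not the text's.

```python
import numpy as np

# Hypothetical 4x4 A and 4x2 B, for illustration only
A = np.array([[0., 1., 0., 0.],
              [0., 0., 1., 0.],
              [0., 0., 0., 1.],
              [0., 0., 0., 0.]])
B = np.array([[0., 0.],
              [0., 0.],
              [1., 0.],
              [0., 1.]])

# M = [B  AB  A^2 B  A^3 B]
M = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(4)])
print(M.shape)                     # (4, 8): 4 rows from B, 2 columns per block
print(np.linalg.matrix_rank(M))   # 4 here, so this (A, B) is controllable
```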
19. To determine if the matrix pair (A, B) is controllable, we compute the rank of the matrix [B AB A²B]. To find the rank, we row reduce:
[B AB A²B] = [0 1 0; 1 .9 .81; 1 .5 .25] ~ [1 0 0; 0 1 0; 0 0 1]
The rank of the matrix is 3, and the pair (A, B) is controllable.
20. To determine if the matrix pair (A, B) is controllable, we compute the rank of the matrix [B AB A²B]. To find the rank, we note that
[B AB A²B] = [1 .5 .19; 1 .7 .45; 0 0 0]
The rank of the matrix must be less than 3, and the pair (A, B) is not controllable.
21. [M] To determine if the matrix pair (A, B) is controllable, we compute the rank of the matrix [B AB A²B A³B]. To find the rank, we row reduce:
[B AB A²B A³B] = [1 0 0 −1; 0 0 −1 1.6; 0 −1 1.6 −.96; −1 1.6 −.96 .024] ~ [1 0 0 −1; 0 1 0 −1.6; 0 0 1 −1.6; 0 0 0 0]
The rank of the matrix is 3, and the pair (A, B) is not controllable.
22. [M] To determine if the matrix pair (A, B) is controllable, we compute the rank of the matrix [B AB A²B A³B]. To find the rank, we row reduce:
[B AB A²B A³B] = [1 0 0 1; 0 0 1 .5; 0 1 .5 11.45; −1 .5 11.45 −10.275] ~ [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 1]
The rank of the matrix is 4, and the pair (A, B) is controllable.
5.1 SOLUTIONS

Notes: Exercises 1–6 reinforce the definitions of eigenvalues and eigenvectors. The subsection on eigenvectors and difference equations, along with Exercises 33 and 34, refers to the chapter introductory example and anticipates discussions of dynamical systems in Sections 5.2 and 5.6.

1. The number 2 is an eigenvalue of A if and only if the equation Ax = 2x has a nontrivial solution. This equation is equivalent to (A − 2I)x = 0. Compute
A − 2I = [3 2; 3 8] − [2 0; 0 2] = [1 2; 3 6]
The columns of A − 2I are obviously linearly dependent, so (A − 2I)x = 0 has a nontrivial solution, and so 2 is an eigenvalue of A.

2. The number −3 is an eigenvalue of A if and only if the equation Ax = −3x has a nontrivial solution. This equation is equivalent to (A + 3I)x = 0. Compute
A + 3I = [−1 4; 6 9] + [3 0; 0 3] = [2 4; 6 12]
The columns of A + 3I are obviously linearly dependent, so (A + 3I)x = 0 has a nontrivial solution, and so −3 is an eigenvalue of A.

3. Is Ax a multiple of x? Compute
[1 −1; 6 −4][1; 3] = [−2; −6] = −2[1; 3]
So [1; 3] is an eigenvector of A with eigenvalue −2.

4. Is Ax a multiple of x? Compute
[−5 2; 3 −6][1; 1] = [−3; −3] = −3[1; 1]
So [1; 1] is an eigenvector of A with eigenvalue −3.
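The test used in Exercises 3–6 — compute Ax and see whether it is a scalar multiple of x — is easy to run numerically. A minimal sketch with a hypothetical matrix (not one of the text's):

```python
import numpy as np

# Hypothetical example in the spirit of Exercises 3-6
A = np.array([[2., 1.],
              [0., 3.]])
x = np.array([1., 1.])

Ax = A @ x
# Ax = [3., 3.] = 3*x, so x is an eigenvector of A with eigenvalue 3
assert np.allclose(Ax, 3 * x)

y = np.array([1., 2.])
ratios = (A @ y) / y        # [4., 3.]: not all equal, so A y is
assert not np.allclose(ratios, ratios[0])   # not a multiple of y
```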
5. Is Ax a multiple of x? Compute
[−4 3 3; 2 −3 −2; −1 0 −2][3; −2; 1] = [−15; 10; −5] = −5[3; −2; 1]
So [3; −2; 1] is an eigenvector of A for the eigenvalue −5.
6. Is Ax a multiple of x? Compute
[3 6 7; 3 2 7; 5 6 4][1; −2; 2] = [5; 13; 1] ≠ λ[1; −2; 2] for any λ
So [1; −2; 2] is not an eigenvector of A.
7. To determine if 4 is an eigenvalue of A, decide if the matrix A − 4I is invertible.
A − 4I = [3 0 −1; 2 3 1; −3 4 5] − [4 0 0; 0 4 0; 0 0 4] = [−1 0 −1; 2 −1 1; −3 4 1]
Invertibility can be checked in several ways, but since an eigenvector is needed in the event that one exists, the best strategy is to row reduce the augmented matrix for (A − 4I)x = 0:
[−1 0 −1 0; 2 −1 1 0; −3 4 1 0] ~ [−1 0 −1 0; 0 −1 −1 0; 0 4 4 0] ~ [−1 0 −1 0; 0 −1 −1 0; 0 0 0 0]
The equation (A − 4I)x = 0 has a nontrivial solution, so 4 is an eigenvalue. Any nonzero solution of (A − 4I)x = 0 is a corresponding eigenvector. The entries in a solution satisfy x1 + x3 = 0 and −x2 − x3 = 0, with x3 free. The general solution is not requested, so to save time, simply take any nonzero value for x3 to produce an eigenvector. If x3 = 1, then x = (−1, −1, 1).
Note: The answer in the text is (1, 1, −1), written in this form to make the students wonder whether the more common answer given above is also correct. This may initiate a class discussion of what answers are “correct.”
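The strategy of Exercises 7–8 — λ is an eigenvalue exactly when A − λI fails to be invertible — can be checked with a rank computation. A sketch with a hypothetical matrix:

```python
import numpy as np

# Hypothetical matrix; 4 is an eigenvalue iff A - 4I is singular
A = np.array([[4., 0.],
              [1., 3.]])
lam = 4.0

M = A - lam * np.eye(2)
# rank(M) = 1 < 2, so (A - 4I)x = 0 has a nontrivial solution
# and 4 is an eigenvalue of A
assert np.linalg.matrix_rank(M) < 2
```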
8. To determine if 1 is an eigenvalue of A, decide if the matrix A − I is invertible.
A − I = [4 −2 3; 0 −1 3; −1 2 −2] − [1 0 0; 0 1 0; 0 0 1] = [3 −2 3; 0 −2 3; −1 2 −3]
Row reducing the augmented matrix [(A − I) 0] yields:
[3 −2 3 0; 0 −2 3 0; −1 2 −3 0] ~ [1 −2 3 0; 0 −2 3 0; 0 4 −6 0] ~ [1 0 0 0; 0 −2 3 0; 0 0 0 0] ~ [1 0 0 0; 0 1 −3/2 0; 0 0 0 0]
The equation (A − I)x = 0 has a nontrivial solution, so 1 is an eigenvalue. Any nonzero solution of (A − I)x = 0 is a corresponding eigenvector. The entries in a solution satisfy x1 = 0 and x2 − (3/2)x3 = 0, with x3 free. The general solution is not requested, so to save time, simply take any nonzero value for x3 to produce an eigenvector. If x3 = 2, then x = [0; 3; 2].
9. For λ = 1: A − 1I = [3 0; 2 1] − [1 0; 0 1] = [2 0; 2 0]
The augmented matrix for (A − I)x = 0 is [2 0 0; 2 0 0]. Thus x1 = 0 and x2 is free. The general solution of (A − I)x = 0 is x2 e2, where e2 = [0; 1], and so e2 is a basis for the eigenspace corresponding to the eigenvalue 1.
For λ = 3: A − 3I = [3 0; 2 1] − [3 0; 0 3] = [0 0; 2 −2]
The equation (A − 3I)x = 0 leads to 2x1 − 2x2 = 0, so that x1 = x2 and x2 is free. The general solution is [x2; x2] = x2[1; 1]. So [1; 1] is a basis for the eigenspace.
Note: For simplicity, the answer omits the set brackets when listing a basis. I permit my students to list a basis without the set brackets. Some instructors may prefer to include brackets.
10. For λ = −5: A + 5I = [−4 2; 3 1] + [5 0; 0 5] = [1 2; 3 6]
The augmented matrix for (A + 5I)x = 0 is [1 2 0; 3 6 0] ~ [1 2 0; 0 0 0]. Thus x1 = −2x2 and x2 is free. The general solution is [−2x2; x2] = x2[−2; 1]. A basis for the eigenspace corresponding to −5 is [−2; 1].
11. For λ = −1: A + I = [1 3; 4 5] + [1 0; 0 1] = [2 3; 4 6]
The augmented matrix for (A + I)x = 0 is [2 3 0; 4 6 0] ~ [1 3/2 0; 0 0 0]. Thus x1 = −(3/2)x2 and x2 is free. The general solution is [−(3/2)x2; x2] = x2[−3/2; 1]. A basis for the eigenspace corresponding to −1 is [−3/2; 1]. Another choice is [−3; 2].
For λ = 7: A − 7I = [1 3; 4 5] − [7 0; 0 7] = [−6 3; 4 −2]
The augmented matrix for (A − 7I)x = 0 is [−6 3 0; 4 −2 0] ~ [1 −1/2 0; 0 0 0]. Thus x1 = (1/2)x2 and x2 is free. The general solution is [(1/2)x2; x2] = x2[1/2; 1]. A basis for the eigenspace corresponding to 7 is [1/2; 1]. Another choice is [1; 2].
12. For λ = 3: A − 3I = [4 1; 3 6] − [3 0; 0 3] = [1 1; 3 3]
The augmented matrix for (A − 3I)x = 0 is [1 1 0; 3 3 0] ~ [1 1 0; 0 0 0]. Thus x1 = −x2 and x2 is free. A basis for the eigenspace corresponding to 3 is [−1; 1].
For λ = 7: A − 7I = [4 1; 3 6] − [7 0; 0 7] = [−3 1; 3 −1]
The augmented matrix for (A − 7I)x = 0 is [−3 1 0; 3 −1 0] ~ [1 −1/3 0; 0 0 0]. Thus x1 = (1/3)x2 and x2 is free. The general solution is [(1/3)x2; x2] = x2[1/3; 1]. A basis for the eigenspace corresponding to 7 is [1/3; 1]. Another choice is [1; 3].
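An eigenspace basis is just a basis for Nul(A − λI), so it can be read off from the right singular vectors of A − λI whose singular values vanish. A sketch, using a hypothetical 2 × 2 matrix with eigenvalue 7:

```python
import numpy as np

# Hypothetical matrix; its eigenvalues are 3 and 7
A = np.array([[4., 1.],
              [3., 6.]])
lam = 7.0

M = A - lam * np.eye(2)
_, s, Vt = np.linalg.svd(M)
basis = Vt[s < 1e-10].T        # columns span the eigenspace for lam
# Every basis column v satisfies A v = 7 v
assert np.allclose(A @ basis, lam * basis)
assert basis.shape == (2, 1)   # one-dimensional eigenspace here
```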
13. For λ = 1:
A − I = [4 0 1; −2 1 0; −2 0 1] − [1 0 0; 0 1 0; 0 0 1] = [3 0 1; −2 0 0; −2 0 0]
The equations for (A − I)x = 0 are easy to solve: 3x1 + x3 = 0 and −2x1 = 0. Row operations hardly seem necessary. Obviously x1 is zero, and hence x3 is also zero. There are three variables, so x2 is free. The general solution of (A − I)x = 0 is x2 e2, where e2 = [0; 1; 0], and so e2 provides a basis for the eigenspace.
For λ = 2:
A − 2I = [4 0 1; −2 1 0; −2 0 1] − [2 0 0; 0 2 0; 0 0 2] = [2 0 1; −2 −1 0; −2 0 −1]
[(A − 2I) 0] = [2 0 1 0; −2 −1 0 0; −2 0 −1 0] ~ [2 0 1 0; 0 −1 1 0; 0 0 0 0] ~ [1 0 1/2 0; 0 1 −1 0; 0 0 0 0]
So x1 = −(1/2)x3, x2 = x3, with x3 free. The general solution of (A − 2I)x = 0 is x3[−1/2; 1; 1]. A nice basis vector for the eigenspace is [−1; 2; 2].
For λ = 3:
A − 3I = [4 0 1; −2 1 0; −2 0 1] − [3 0 0; 0 3 0; 0 0 3] = [1 0 1; −2 −2 0; −2 0 −2]
[(A − 3I) 0] = [1 0 1 0; −2 −2 0 0; −2 0 −2 0] ~ [1 0 1 0; 0 −2 2 0; 0 0 0 0] ~ [1 0 1 0; 0 1 −1 0; 0 0 0 0]
So x1 = −x3, x2 = x3, with x3 free. A basis vector for the eigenspace is [−1; 1; 1].
14. For λ = 3: A − 3I = [4 0 −1; 3 0 3; 2 −2 5] − [3 0 0; 0 3 0; 0 0 3] = [1 0 −1; 3 −3 3; 2 −2 2]
The augmented matrix for (A − 3I)x = 0 is
[(A − 3I) 0] = [1 0 −1 0; 3 −3 3 0; 2 −2 2 0] ~ [1 0 −1 0; 0 −3 6 0; 0 0 0 0] ~ [1 0 −1 0; 0 1 −2 0; 0 0 0 0]
Thus x1 = x3, x2 = 2x3, with x3 free. The general solution of (A − 3I)x = 0 is x3[1; 2; 1]. A basis for the eigenspace corresponding to 3 is [1; 2; 1].
15. For λ = −5: [(A + 5I) 0] = [1 1 1 0; 2 2 2 0; 3 3 3 0] ~ [1 1 1 0; 0 0 0 0; 0 0 0 0]
Thus x1 + x2 + x3 = 0, with x2 and x3 free. The general solution of (A + 5I)x = 0 is
x = [−x2 − x3; x2; x3] = x2[−1; 1; 0] + x3[−1; 0; 1]
A basis for the eigenspace corresponding to −5 is {[−1; 1; 0], [−1; 0; 1]}.
Note: For simplicity, the text answer omits the set brackets. I permit my students to list a basis without the set brackets. Some instructors may prefer to include brackets.
16. For λ = 4:
[(A − 4I) 0] = [1 0 −1 0 0; −1 1 0 0 0; 2 −1 −1 0 0; 4 −2 −2 0 0] ~ [1 0 −1 0 0; 0 1 −1 0 0; 0 −1 1 0 0; 0 −2 2 0 0] ~ [1 0 −1 0 0; 0 1 −1 0 0; 0 0 0 0 0; 0 0 0 0 0]
So x1 = x3, x2 = x3, with x3 and x4 free variables. The general solution of (A − 4I)x = 0 is
x = [x3; x3; x3; x4] = x3[1; 1; 1; 0] + x4[0; 0; 0; 1]. Basis for the eigenspace: {[1; 1; 1; 0], [0; 0; 0; 1]}
Note: I urge my students always to include the extra column of zeros when solving a homogeneous system. Exercise 16 provides a situation in which failing to add the column is likely to create problems for a student, because the matrix A − 4I itself has a column of zeros.
17. The eigenvalues of [0 0 0; 0 3 4; 0 0 −2] are 0, 3, and −2, the numbers on the main diagonal, by Theorem 1.
18. The eigenvalues of [5 0 0; 0 0 0; 1 0 3] are 5, 0, and 3, the numbers on the main diagonal, by Theorem 1.
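Theorem 1's claim — the eigenvalues of a triangular matrix are its diagonal entries — can be confirmed numerically. A sketch using the triangular matrix of Exercise 17:

```python
import numpy as np

# Triangular matrix from Exercise 17: eigenvalues should be 0, 3, -2
A = np.array([[0., 0., 0.],
              [0., 3., 4.],
              [0., 0., -2.]])
eigs = np.linalg.eigvals(A)
assert np.allclose(sorted(eigs), sorted(np.diag(A)))
```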
19. The matrix [1 2 3; 1 2 3; 1 2 3] is not invertible because its columns are linearly dependent. So the number 0 is an eigenvalue of the matrix. See the discussion following Example 5.
20. The matrix A = [2 2 2; 2 2 2; 2 2 2] is not invertible because its columns are linearly dependent. So the number 0 is an eigenvalue of A. Eigenvectors for the eigenvalue 0 are solutions of Ax = 0 and therefore have entries that produce a linear dependence relation among the columns of A. Any nonzero vector (in ℝ³) whose entries sum to 0 will work. Find any two such vectors that are not multiples; for instance, [1; 1; −2] and [1; −1; 0].
21. a. False. The equation Ax = λx must have a nontrivial solution.
b. True. See the paragraph after Example 5.
c. True. See the discussion of equation (3).
d. True. See Example 2 and the paragraph preceding it. Also, see the Numerical Note.
e. False. See the warning after Example 3.
22. a. False. The vector x in Ax = λx must be nonzero.
b. False. See Example 4 for a two-dimensional eigenspace, which contains two linearly independent eigenvectors corresponding to the same eigenvalue. The statement given is not at all the same as Theorem 2. In fact, it is the converse of Theorem 2 (for the case r = 2).
c. True. See the paragraph after Example 1.
d. False. Theorem 1 concerns a triangular matrix. See Examples 3 and 4 for counterexamples.
e. True. See the paragraph following Example 3. The eigenspace of A corresponding to λ is the null space of the matrix A − λI.
23. If a 2 × 2 matrix A were to have three distinct eigenvalues, then by Theorem 2 there would correspond three linearly independent eigenvectors (one for each eigenvalue). This is impossible because the vectors all belong to a two-dimensional vector space, in which any set of three vectors is linearly dependent. See Theorem 8 in Section 1.7. In general, if an n × n matrix has p distinct eigenvalues, then by Theorem 2 there would be a linearly independent set of p eigenvectors (one for each eigenvalue). Since these vectors belong to an n-dimensional vector space, p cannot exceed n.
24. A simple example of a 2 × 2 matrix with only one distinct eigenvalue is a triangular matrix with the same number on the diagonal. By experimentation, one finds that if such a matrix is actually a diagonal matrix then the eigenspace is two-dimensional, and otherwise the eigenspace is only one-dimensional.
Examples: [4 1; 0 4] and [4 5; 0 4].
25. If λ is an eigenvalue of A, then there is a nonzero vector x such that Ax = λx. Since A is invertible, A⁻¹(Ax) = A⁻¹(λx), and so x = λ(A⁻¹x). Since x ≠ 0 (and since A is invertible), λ cannot be zero. Then λ⁻¹x = A⁻¹x, which shows that λ⁻¹ is an eigenvalue of A⁻¹.
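The reciprocal relationship in Exercise 25 is easy to check numerically. A sketch with a hypothetical invertible matrix:

```python
import numpy as np

# Hypothetical invertible matrix; eigenvalues are 1 and 3
A = np.array([[2., 1.],
              [1., 2.]])
vals = np.linalg.eigvals(A)
inv_vals = np.linalg.eigvals(np.linalg.inv(A))
# Eigenvalues of A^{-1} are the reciprocals of those of A
assert np.allclose(sorted(inv_vals), sorted(1.0 / vals))
```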
Note: The Study Guide points out here that the relation between the eigenvalues of A and A⁻¹ is important in the so-called inverse power method for estimating an eigenvalue of a matrix. See Section 5.8.
26. Suppose that A² is the zero matrix. If Ax = λx for some x ≠ 0, then A²x = A(Ax) = A(λx) = λAx = λ²x. Since x is nonzero and A²x = 0, λ must be zero. Thus each eigenvalue of A is zero.
27. Use the Hint in the text to write, for any λ, (A − λI)^T = A^T − λI. Since (A − λI)^T is invertible if and only if A − λI is invertible (by Theorem 6(c) in Section 2.2), it follows that A^T − λI is not invertible if and only if A − λI is not invertible. That is, λ is an eigenvalue of A^T if and only if λ is an eigenvalue of A.
Note: If you discuss Exercise 27, you might ask students on a test to show that A and A^T have the same characteristic polynomial (discussed in Section 5.2). Since det A^T = det A for any square matrix A,
det(A^T − λI) = det(A^T − λI^T) = det((A − λI)^T) = det(A − λI).
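The shared characteristic polynomial of A and A^T can be confirmed numerically. A sketch with a hypothetical matrix:

```python
import numpy as np

# Hypothetical matrix; A and A.T should share eigenvalues
A = np.array([[1., 2.],
              [3., 4.]])
assert np.allclose(sorted(np.linalg.eigvals(A)),
                   sorted(np.linalg.eigvals(A.T)))
# np.poly returns characteristic-polynomial coefficients
assert np.allclose(np.poly(A), np.poly(A.T))
```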
28. If A is lower triangular, then A^T is upper triangular and has the same diagonal entries as A. Hence, by the part of Theorem 1 already proved in the text, these diagonal entries are eigenvalues of A^T. By Exercise 27, they are also eigenvalues of A.
29. Let v be the vector in ℝⁿ whose entries are all ones. Then Av = sv.
30. Suppose the column sums of an n × n matrix A all equal the same number s. By Exercise 29 applied to A^T in place of A, the number s is an eigenvalue of A^T. By Exercise 27, s is an eigenvalue of A.
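Exercises 29–30 can be checked directly: if every row of A sums to s, then A applied to the all-ones vector gives s times that vector. A sketch with a hypothetical matrix:

```python
import numpy as np

# Hypothetical matrix whose rows each sum to 7
A = np.array([[4., 1., 2.],
              [0., 5., 2.],
              [3., 3., 1.]])
v = np.ones(3)
assert np.allclose(A @ v, 7 * v)     # so 7 is an eigenvalue of A
assert np.any(np.isclose(np.linalg.eigvals(A), 7))
```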
31. Suppose T reflects points across (or through) a line that passes through the origin. That line consists of all multiples of some nonzero vector v. The points on this line do not move under the action of T. So T(v) = v. If A is the standard matrix of T, then Av = v. Thus v is an eigenvector of A corresponding to the eigenvalue 1. The eigenspace is Span{v}. Another eigenspace is generated by any nonzero vector u that is perpendicular to the given line. (Perpendicularity in ℝ² should be a familiar concept even though orthogonality in ℝⁿ has not been discussed yet.) Each vector x on the line through u is transformed into the vector −x. The eigenvalue is −1.
32. Since T rotates points around a given line, the points on the line are not moved at all. Hence 1 is an
eigenvalue of the standard matrix A of T, and the corresponding eigenspace is the line the points are
being rotated around.
33. (The solution is given in the text.)
a. Replace k by k + 1 in the definition of x_k, and obtain x_{k+1} = c1 λ^{k+1} u + c2 μ^{k+1} v.
b. A x_k = A(c1 λ^k u + c2 μ^k v)
   = c1 λ^k Au + c2 μ^k Av    (by linearity)
   = c1 λ^k λu + c2 μ^k μv    (since u and v are eigenvectors)
   = x_{k+1}
34. You could try to write x0 as a linear combination of eigenvectors, v1, ..., vp. If λ1, ..., λp are corresponding eigenvalues, and if x0 = c1 v1 + ⋯ + cp vp, then you could define
x_k = c1 λ1^k v1 + ⋯ + cp λp^k vp
In this case, for k = 0, 1, 2, ...,
A x_k = A(c1 λ1^k v1 + ⋯ + cp λp^k vp)
   = c1 λ1^k Av1 + ⋯ + cp λp^k Avp    (linearity)
   = c1 λ1^{k+1} v1 + ⋯ + cp λp^{k+1} vp    (the vi are eigenvectors)
   = x_{k+1}
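The eigenvector-decomposition formula of Exercise 34 can be verified by iterating x_{k+1} = A x_k and comparing with the closed form. A sketch with a hypothetical diagonalizable matrix:

```python
import numpy as np

# Hypothetical diagonal A, so e1 and e2 are eigenvectors
A = np.array([[2., 0.],
              [0., 0.5]])
v1, v2 = np.array([1., 0.]), np.array([0., 1.])
l1, l2 = 2.0, 0.5
c1, c2 = 3.0, -1.0

xk = c1 * v1 + c2 * v2          # x0
for _ in range(5):
    xk = A @ xk                 # iterate x_{k+1} = A x_k
# Compare with x_5 = c1*l1^5*v1 + c2*l2^5*v2
assert np.allclose(xk, c1 * l1**5 * v1 + c2 * l2**5 * v2)
```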
35. Using the figure in the exercise, plot T(u) as 2u, because u is an eigenvector for the eigenvalue 2 of the standard matrix A. Likewise, plot T(v) as 3v, because v is an eigenvector for the eigenvalue 3. Since T is linear, the image of w is T(w) = T(u + v) = T(u) + T(v).
36. As in Exercise 35, T(u) = −u and T(v) = 3v because u and v are eigenvectors for the eigenvalues −1 and 3, respectively, of the standard matrix A. Since T is linear, the image of w is T(w) = T(u + v) = T(u) + T(v).
Note: The matrix programs supported by this text all have an eigenvalue command. In some cases, such as MATLAB, the command can be structured so it provides eigenvectors as well as a list of the eigenvalues. At this point in the course, students should not use the extra power that produces eigenvectors. Students need to be reminded frequently that eigenvectors of A are null vectors of a translate of A. That is why the instructions for Exercises 35–38 tell students to use the method of Example 4.
It is my experience that nearly all students need manual practice finding eigenvectors by the method of Example 4, at least in this section if not also in Sections 5.2 and 5.3. However, [M] exercises do create a burden if eigenvectors must be found manually. For this reason, the data files for the text include a special command, nulbasis, for each matrix program (MATLAB, Maple, etc.). The output of nulbasis(A) is a matrix whose columns provide a basis for the null space of A, and these columns are identical to the ones a student would find by row reducing the augmented matrix [A 0]. With nulbasis, student answers will be the same (up to multiples) as those in the text. I encourage my students to use technology to speed up all numerical homework here, not just the [M] exercises.
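nulbasis is a custom command supplied with the text's data files, not a built-in. A rough Python stand-in (the name nulbasis is reused only for illustration) can be built from the SVD; unlike the text's version, it returns orthonormal columns rather than row-reduction-style rational entries, so answers agree only up to scaling:

```python
import numpy as np

def nulbasis(A, tol=1e-10):
    """Return a matrix whose columns form a basis for Nul A (via the SVD)."""
    A = np.atleast_2d(np.asarray(A, dtype=float))
    _, s, Vt = np.linalg.svd(A)
    # Pad singular values so there is one per row of Vt
    s_full = np.concatenate([s, np.zeros(Vt.shape[0] - len(s))])
    return Vt[s_full < tol].T

A = np.array([[1., -1., 0.],
              [0.,  0., 0.]])
N = nulbasis(A)
assert np.allclose(A @ N, 0)     # every column solves Ax = 0
assert N.shape[1] == 2           # dim Nul A = 3 - rank A = 2
```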
37. [M] Let A be the given matrix. Use the MATLAB commands eig and nulbasis (or equivalent commands). The command ev = eig(A) computes the three eigenvalues of A and stores them in a vector ev. In this exercise, ev = (10, 15, 5). The eigenspace for the eigenvalue 10 is the null space of A − 10I. Use nulbasis to produce a basis for each null space:
nulbasis(A - ev(1)*eye(3)) = [3; 2; 1]. Basis for the eigenspace for λ = 10: {[3; 2; 1]}
For the next eigenvalue, 15, compute nulbasis(A - ev(2)*eye(3)) = [2; 2; 1]. Basis for the eigenspace for λ = 15: {[2; 2; 1]}
For the next eigenvalue, 5, compute nulbasis(A - ev(3)*eye(3)) = [1/2; 1/2; 1]. Scaling this vector by 2 to eliminate the fractions provides a basis for the eigenspace for λ = 5: {[1; 1; 2]}
38. [M] ev = eig(A) = (2, 2, 1, 1).
For λ = 2: nulbasis(A - ev(1)*eye(4)) yields the basis {[2; 2; 1; 1], [0; 1; 1; 0]} for the eigenspace.
For λ = 1: nulbasis(A - ev(3)*eye(4)) yields the basis {[1; 1; 0; 1], [2; 2; 0; 1]} for the eigenspace.
39. [M] For λ = 4, basis: {[0; 1; 2; 0; 1]}. For λ = 12, basis: {[0; 0; 1; 1; 0], [2; 1; 2; 0; 1]}. For λ = 8, basis: {[6; 3; 3; 2; 0], [0; 0; 1; 0; 1]}.
40. [M] For λ = 14, basis: {[1; 0; 1; 0; 0], [10; 6; 0; 5; 1]}. For λ = 42, basis: {[0; 1; 2; −5; 0], [5; 1; −3; 0; 5]}.
Note: Since so many eigenvalues in text problems are small integers, it is easy for students to form a habit of entering a value for λ in nulbasis(A - λ*I) based on a visual examination of the eigenvalues produced by eig(A) when only a few decimal places for λ are displayed. Using ev = eig(A) and nulbasis(A - ev(j)*eye(n)) helps avoid problems caused by roundoff errors.
5.2 SOLUTIONS

Notes: Exercises 9–14 can be omitted, unless you want your students to have some facility with determinants of 3 × 3 matrices. In later sections, the text will provide eigenvalues when they are needed for matrices larger than 2 × 2. If you discussed partitioned matrices in Section 2.4, you might wish to bring in Supplementary Exercises 12–14 in Chapter 5. (Also, see Exercise 14 of Section 2.4.)
Exercises 25 and 27 support the subsection on dynamical systems. The calculations in these exercises and Example 5 prepare for the discussion in Section 5.6 about eigenvector decompositions.
1. A = [2 7; 7 2], A − λI = [2−λ 7; 7 2−λ]. The characteristic polynomial is
det(A − λI) = (2 − λ)² − 7² = 4 − 4λ + λ² − 49 = λ² − 4λ − 45
In factored form, the characteristic equation is (λ − 9)(λ + 5) = 0, so the eigenvalues of A are 9 and −5.
2. A = [−4 −1; 6 1], A − λI = [−4−λ −1; 6 1−λ]. The characteristic polynomial is
det(A − λI) = (−4 − λ)(1 − λ) − 6 · (−1) = λ² + 3λ + 2
Since λ² + 3λ + 2 = (λ + 2)(λ + 1), the eigenvalues of A are −2 and −1.
3. A = [−4 2; 6 7], A − λI = [−4−λ 2; 6 7−λ]. The characteristic polynomial is
det(A − λI) = (−4 − λ)(7 − λ) − (2)(6) = λ² − 3λ − 40
Use the quadratic formula to solve the characteristic equation and find the eigenvalues:
λ = (−b ± √(b² − 4ac))/2a = (3 ± √(9 + 160))/2 = (3 ± 13)/2 = 8, −5
4. A = [8 2; 3 3], A − λI = [8−λ 2; 3 3−λ]. The characteristic polynomial of A is
det(A − λI) = (8 − λ)(3 − λ) − (3)(2) = λ² − 11λ + 18 = (λ − 9)(λ − 2)
The eigenvalues of A are 9 and 2.
5. A = [8 4; 4 8], A − λI = [8−λ 4; 4 8−λ]. The characteristic polynomial of A is
det(A − λI) = (8 − λ)(8 − λ) − (4)(4) = λ² − 16λ + 48 = (λ − 4)(λ − 12)
Thus, A has eigenvalues 4 and 12.
6. A = [9 −2; 2 5], A − λI = [9−λ −2; 2 5−λ]. The characteristic polynomial is
det(A − λI) = (9 − λ)(5 − λ) − (−2)(2) = λ² − 14λ + 49 = (λ − 7)(λ − 7)
Thus A has only one eigenvalue, 7, with multiplicity 2.
7. A = [5 3; −4 4], A − λI = [5−λ 3; −4 4−λ]. The characteristic polynomial is
det(A − λI) = (5 − λ)(4 − λ) − (3)(−4) = λ² − 9λ + 32
Use the quadratic formula to solve det(A − λI) = 0:
λ = (9 ± √(81 − 4(32)))/2 = (9 ± √(−47))/2
These values are complex numbers, not real numbers, so A has no real eigenvalues. There is no nonzero vector x in ℝ² such that Ax = λx, because a real vector Ax cannot equal a complex multiple of x.
8. A = [−4 3; 2 1], A − λI = [−4−λ 3; 2 1−λ]. The characteristic polynomial is
det(A − λI) = (−4 − λ)(1 − λ) − (2)(3) = λ² + 3λ − 10 = (λ + 5)(λ − 2)
Thus, A has eigenvalues −5 and 2.
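For any 2 × 2 matrix, the characteristic polynomial worked out in Exercises 1–8 is λ² − (tr A)λ + det A, so its roots can be found numerically. A sketch using Exercise 1's matrix:

```python
import numpy as np

# Exercise 1's matrix: characteristic polynomial lam^2 - 4*lam - 45
A = np.array([[2., 7.],
              [7., 2.]])
coeffs = [1.0, -np.trace(A), np.linalg.det(A)]
roots = np.roots(coeffs)
# Eigenvalues should be 9 and -5
assert np.allclose(sorted(roots), [-5., 9.])
```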
9. det(A − λI) = det [4−λ 0 −1; 0 4−λ −1; 1 0 2−λ]. Using cofactor expansion down column 1, the characteristic polynomial is
det(A − λI) = (4 − λ) · det [4−λ −1; 0 2−λ] + 1 · det [0 −1; 4−λ −1]
   = (4 − λ)(λ² − 6λ + 8) + (4 − λ)
   = (4 − λ)(λ² − 6λ + 9) = (4 − λ)(3 − λ)(3 − λ)
   = −λ³ + 10λ² − 33λ + 36
This matrix has eigenvalue 4 with multiplicity 1, and eigenvalue 3 with multiplicity 2.
10. det(A − λI) = det [3−λ 1 1; 0 5−λ 0; −2 0 7−λ]. Using cofactor expansion down the first column, the characteristic polynomial is
det(A − λI) = (3 − λ) · det [5−λ 0; 0 7−λ] + (−2) · det [1 1; 5−λ 0]
   = (5 − λ)(λ² − 10λ + 21) + 2(5 − λ)
   = (5 − λ)(λ² − 10λ + 23)
   = −λ³ + 15λ² − 73λ + 115
11. The special arrangement of zeros in A makes a cofactor expansion along the first row highly effective.
det(A − λI) = det [3−λ 0 0; 2 1−λ 4; 1 0 4−λ] = (3 − λ) · det [1−λ 4; 0 4−λ]
   = (3 − λ)(1 − λ)(4 − λ) = (3 − λ)(λ² − 5λ + 4) = −λ³ + 8λ² − 19λ + 12
If only the eigenvalues were required, there would be no need here to write the characteristic polynomial in expanded form.
12. Make a cofactor expansion along the third row:
det(A − λI) = det [−1−λ 0 2; 3 1−λ 0; 0 1 2−λ]
   = −1 · det [−1−λ 2; 3 0] + (2 − λ) · det [−1−λ 0; 3 1−λ]
   = 6 + (2 − λ)(1 − λ)(−1 − λ) = −λ³ + 2λ² + λ + 4
13. Make a cofactor expansion down the third column:
det(A − λI) = det [6−λ −2 0; −2 9−λ 0; 5 8 3−λ] = (3 − λ) · det [6−λ −2; −2 9−λ]
   = (3 − λ)[(6 − λ)(9 − λ) − (−2)(−2)] = (3 − λ)(λ² − 15λ + 50)
   = −λ³ + 18λ² − 95λ + 150, or (3 − λ)(λ − 5)(λ − 10)
14. Make a cofactor expansion along the second column:
det(A − λI) = det [4−λ 0 −1; −1 −λ 4; 0 2 3−λ]
   = (−λ) · det [4−λ −1; 0 3−λ] − 2 · det [4−λ −1; −1 4]
   = (−λ)[(4 − λ)(3 − λ)] − 2[(4 − λ)(4) − 1] = −λ³ + 7λ² − 12λ + 8λ − 30
   = −λ³ + 7λ² − 4λ − 30
15. Use the fact that the determinant of a triangular matrix is the product of the diagonal entries:
det(A − λI) = det [5−λ 5 0 2; 0 2−λ −3 6; 0 0 3−λ −2; 0 0 0 5−λ] = (5 − λ)²(2 − λ)(3 − λ)
The eigenvalues are 5, 5, 2, and 3.
16. The determinant of a triangular matrix is the product of its diagonal entries:
det(A − λI) = det [3−λ 0 0 0; 6 2−λ 0 0; 0 3 6−λ 0; 2 3 3 −5−λ] = (3 − λ)(2 − λ)(6 − λ)(−5 − λ)
The eigenvalues are 3, 2, 6, and −5.
17. The determinant of a triangular matrix is the product of its diagonal entries:
det(A − λI) = det [3−λ 0 0 0 0; −5 1−λ 0 0 0; 3 8 −λ 0 0; 0 −7 2 1−λ 0; −4 1 9 −2 3−λ] = (3 − λ)²(1 − λ)²(−λ)
The eigenvalues are 3, 3, 1, 1, and 0.
18. Row reduce the augmented matrix for the equation (A − 4I)x = 0:
[0 2 3 3 0; 0 −2 h 3 0; 0 0 0 14 0; 0 0 0 2 0] ~ [0 2 3 3 0; 0 0 h+3 6 0; 0 0 0 1 0; 0 0 0 0 0] ~ [0 2 3 0 0; 0 0 h+3 0 0; 0 0 0 1 0; 0 0 0 0 0]
For a two-dimensional eigenspace, the system above needs two free variables. This happens if and only if h = −3.
19. Since the equation det(A − λI) = (λ1 − λ)(λ2 − λ)⋯(λn − λ) holds for all λ, set λ = 0 and conclude that det A = λ1 λ2 ⋯ λn.
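The identity of Exercise 19 — det A equals the product of the eigenvalues — is easy to confirm numerically. A sketch with a hypothetical matrix:

```python
import numpy as np

# Hypothetical matrix (det A = 115)
A = np.array([[3., 1., 1.],
              [0., 5., 0.],
              [-2., 0., 7.]])
det_A = np.linalg.det(A)
eig_product = np.prod(np.linalg.eigvals(A))
# det A = product of eigenvalues (take the real part in case of
# numerically complex conjugate pairs)
assert np.isclose(det_A, np.real(eig_product))
```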
20. det(A^T − λI) = det(A^T − λI^T)
   = det((A − λI)^T)    (transpose property)
   = det(A − λI)    (Theorem 3(c))
21. a. False. See Example 1.
b. False. See Theorem 3.
c. True. See Theorem 3.
d. False. See the solution of Example 4.
22. a. False. See the paragraph before Theorem 3.
b. False. See Theorem 3.
c. True. See the paragraph before Example 4.
d. False. See the warning after Theorem 4.
23. If A = QR, with Q invertible, and if A1 = RQ, then write A1 = RQ = Q⁻¹QRQ = Q⁻¹AQ, which shows that A1 is similar to A.
24. First, observe that if P is invertible, then Theorem 3(b) shows that
1 = det I = det(PP⁻¹) = (det P)(det P⁻¹)
Use Theorem 3(b) again when A = PBP⁻¹:
det A = det(PBP⁻¹) = (det P)(det B)(det P⁻¹) = (det B)(det P)(det P⁻¹) = det B
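The conclusion of Exercises 23–24 — similar matrices share a determinant (and a characteristic polynomial) — can be checked numerically. A sketch with hypothetical P and B:

```python
import numpy as np

# Hypothetical similar pair: A = P B P^{-1}
B = np.array([[2., 1.],
              [0., 3.]])
P = np.array([[1., 1.],
              [0., 1.]])
A = P @ B @ np.linalg.inv(P)
assert np.isclose(np.linalg.det(A), np.linalg.det(B))
assert np.allclose(np.poly(A), np.poly(B))   # same characteristic polynomial
```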
25. Example 5 of Section 4.9 showed that Av1 = v1, which means that v1 is an eigenvector of A corresponding to the eigenvalue 1.
a. Since A is a 2 × 2 matrix, the eigenvalues are easy to find, and factoring the characteristic polynomial is easy when one of the two factors is known.
det [.6−λ .3; .4 .7−λ] = (.6 − λ)(.7 − λ) − (.3)(.4) = λ² − 1.3λ + .3 = (λ − 1)(λ − .3)
The eigenvalues are 1 and .3. For the eigenvalue .3, solve (A − .3I)x = 0:
[.6−.3 .3 0; .4 .7−.3 0] = [.3 .3 0; .4 .4 0] ~ [1 1 0; 0 0 0]
Here x1 + x2 = 0, with x2 free. The general solution is not needed. Set x2 = 1 to find an eigenvector v2 = [−1; 1]. A suitable basis for ℝ² is {v1, v2}.
b. Write x0 = v1 + c v2: [1/2; 1/2] = [3/7; 4/7] + c[−1; 1]. By inspection, c is −1/14. (The value of c depends on how v2 is scaled.)
c. For k = 1, 2, ..., define x_k = A^k x0. Then
x1 = A(v1 + c v2) = Av1 + c Av2 = v1 + c(.3)v2,
because v1 and v2 are eigenvectors. Again
x2 = A x1 = A(v1 + c(.3)v2) = Av1 + c(.3)Av2 = v1 + c(.3)(.3)v2
Continuing, the general pattern is x_k = v1 + c(.3)^k v2. As k increases, the second term tends to 0 and so x_k tends to v1.
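The convergence claimed in part (c) can be watched numerically: repeated multiplication by the migration matrix drives any probability vector toward v1, because the (.3)^k term dies out.

```python
import numpy as np

A = np.array([[.6, .3],
              [.4, .7]])
v1 = np.array([3/7, 4/7])     # eigenvector for eigenvalue 1
x = np.array([.5, .5])        # x0, a probability vector
for _ in range(50):
    x = A @ x                 # x_k = A^k x0
# After many steps the (.3)^k component is negligible
assert np.allclose(x, v1)
```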
26. If a ≠ 0, then A = [a b; c d] ~ [a b; 0 d − ca⁻¹b] = U, and det A = (a)(d − ca⁻¹b) = ad − bc. If a = 0, then A = [0 b; c d] ~ [c d; 0 b] = U (with one interchange), so det A = (−1)¹(cb) = 0 − bc = ad − bc.
27. a. Av1 = v1, Av2 = .5v2, Av3 = .2v3.
b. The set {v1, v2, v3} is linearly independent because the eigenvectors correspond to different eigenvalues (Theorem 2). Since there are three vectors in the set, the set is a basis for ℝ³. So there exist unique constants such that x0 = c1 v1 + c2 v2 + c3 v3, and w^T x0 = c1 w^T v1 + c2 w^T v2 + c3 w^T v3. Since x0 and v1 are probability vectors and since the entries in v2 and v3 sum to 0, the above equation shows that c1 = 1.
c. By (b), x0 = c1 v1 + c2 v2 + c3 v3. Using (a),
x_k = A^k x0 = c1 A^k v1 + c2 A^k v2 + c3 A^k v3 = v1 + c2(.5)^k v2 + c3(.2)^k v3 → v1 as k → ∞
28. [M] Answers will vary, but should show that the eigenvectors of A are not the same as the eigenvectors of Aᵀ, unless, of course, Aᵀ = A.
29. [M] Answers will vary. The product of the eigenvalues of A should equal det A.
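The fact asserted in Exercise 29 is easy to confirm numerically; a minimal sketch, where the test matrix is an arbitrary illustration (any square matrix works):

```python
import numpy as np

# Exercise 29: the product of the eigenvalues of A equals det A.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
eigenvalues = np.linalg.eigvals(A)
print(np.prod(eigenvalues).real, np.linalg.det(A))
```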
30. [M] The characteristic polynomials and the eigenvalues for the various values of a are given in the following table:

  a      Characteristic Polynomial     Eigenvalues
  31.8   −.4 − 2.6t + 4t² − t³         3.1279, 1, −.1279
  31.9   .8 − 3.8t + 4t² − t³          2.7042, 1, .2958
  32.0   2 − 5t + 4t² − t³             2, 1, 1
  32.1   3.2 − 6.2t + 4t² − t³         1.5 ± .9747i, 1
  32.2   4.4 − 7.4t + 4t² − t³         1.5 ± 1.4663i, 1

(The accompanying graphs of the characteristic polynomials are omitted.)
Notes: An appendix in Section 5.3 of the Study Guide gives an example of factoring a cubic polynomial with integer coefficients, in case you want your students to find integer eigenvalues of simple 3×3 or perhaps 4×4 matrices.
The MATLAB box for Section 5.3 introduces the command poly(A), which lists the coefficients of the characteristic polynomial of the matrix A, and it gives MATLAB code that will produce a graph of the characteristic polynomial. (This is needed for Exercise 30.) The Maple and Mathematica appendices have corresponding information. The appendices for the TI calculators contain only the commands that list the coefficients of the characteristic polynomial.

5.3 SOLUTIONS

1. P = [5 7; 2 3], D = [2 0; 0 1], A = PDP⁻¹, and A⁴ = PD⁴P⁻¹. We compute P⁻¹ = [3 −7; −2 5], D⁴ = [16 0; 0 1], and
A⁴ = [5 7; 2 3][16 0; 0 1][3 −7; −2 5] = [226 −525; 90 −209]

2. P = [1 2; 2 3], D = [1 0; 0 3], A = PDP⁻¹, and A⁴ = PD⁴P⁻¹. We compute P⁻¹ = [−3 2; 2 −1], D⁴ = [1 0; 0 81], and
A⁴ = [1 2; 2 3][1 0; 0 81][−3 2; 2 −1] = [321 −160; 480 −239]

3. A^k = PD^kP⁻¹ = [1 0; 2 1][a^k 0; 0 b^k][1 0; −2 1] = [a^k 0; 2a^k − 2b^k  b^k]

4. A^k = PD^kP⁻¹ = [3 2; 2 1][3^k 0; 0 2^k][−1 2; 2 −3] = [−3·3^k + 4·2^k  6·3^k − 6·2^k; −2·3^k + 2·2^k  4·3^k − 3·2^k]

5. By the Diagonalization Theorem, eigenvectors form the columns of the left factor, and they correspond respectively to the eigenvalues on the diagonal of the middle factor.
Reading the given factorization, the eigenvalues of A are the diagonal entries of the middle factor, and a basis for each eigenspace consists of the corresponding columns of the left factor.
6. As in Exercise 5, inspection of the factorization gives the eigenvalues (the diagonal entries of the middle factor) and a basis for each eigenspace (the corresponding columns of the left factor).
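The computation pattern of Exercises 1-6 can be sketched numerically; a minimal example using the factorization data of Exercise 1:

```python
import numpy as np

# Exercises 1-2: with A = PDP^{-1}, powers of A reduce to powers of the
# diagonal factor: A^4 = P D^4 P^{-1}.
P = np.array([[5, 7],
              [2, 3]])
D = np.diag([2, 1])

A4 = P @ np.linalg.matrix_power(D, 4) @ np.linalg.inv(P)
print(np.round(A4))   # recovers [226 -525; 90 -209], as in Exercise 1
```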
7. Since A is triangular, its eigenvalues are obviously ±1.
For λ = 1: A − 1I = [0 0; 6 −2]. The equation (A − 1I)x = 0 amounts to 6x1 − 2x2 = 0, so x1 = (1/3)x2 with x2 free. The general solution is x2[1/3; 1], and a nice basis vector for the eigenspace is v1 = [1; 3].
For λ = −1: A + 1I = [2 0; 6 0]. The equation (A + 1I)x = 0 amounts to 2x1 = 0, so x1 = 0 with x2 free. The general solution is x2[0; 1], and a basis vector for the eigenspace is v2 = [0; 1].
From v1 and v2 construct P = [v1 v2] = [1 0; 3 1]. Then set D = [1 0; 0 −1], where the eigenvalues in D correspond to v1 and v2 respectively.
8. Since A is triangular, its only eigenvalue is obviously 3.
For λ = 3: A − 3I = [0 2; 0 0]. The equation (A − 3I)x = 0 amounts to 2x2 = 0, so x2 = 0 with x1 free. The general solution is x1[1; 0]. Since we cannot generate an eigenvector basis for R², A is not diagonalizable.
9. To find the eigenvalues of A, compute its characteristic polynomial:
det(A − λI) = det [2−λ −1; 1 4−λ] = (2 − λ)(4 − λ) − (−1)(1) = λ² − 6λ + 9 = (λ − 3)²
Thus the only eigenvalue of A is 3.
For λ = 3: A − 3I = [−1 −1; 1 1]. The equation (A − 3I)x = 0 amounts to x1 + x2 = 0, so x1 = −x2 with x2 free. The general solution is x2[−1; 1]. Since we cannot generate an eigenvector basis for R², A is not diagonalizable.
10. To find the eigenvalues of A, compute its characteristic polynomial:
det(A − λI) = det [1−λ 3; 4 2−λ] = (1 − λ)(2 − λ) − (3)(4) = λ² − 3λ − 10 = (λ − 5)(λ + 2)
Thus the eigenvalues of A are 5 and −2.
For λ = −2: A + 2I = [3 3; 4 4]. The equation (A + 2I)x = 0 amounts to x1 + x2 = 0, so x1 = −x2 with x2 free. The general solution is x2[−1; 1], and a nice basis vector for the eigenspace is v1 = [−1; 1].
For λ = 5: A − 5I = [−4 3; 4 −3]. The equation (A − 5I)x = 0 amounts to −4x1 + 3x2 = 0, so x1 = (3/4)x2 with x2 free. The general solution is x2[3/4; 1], and a basis vector for the eigenspace is v2 = [3; 4].
From v1 and v2 construct P = [v1 v2] = [−1 3; 1 4]. Then set D = [−2 0; 0 5], where the eigenvalues in D correspond to v1 and v2 respectively.
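The factorization just constructed can be verified numerically; a minimal sketch using the data of Exercise 10:

```python
import numpy as np

# Exercise 10: A = [1 3; 4 2] has eigenvalues -2 and 5 with eigenvectors
# [-1; 1] and [3; 4]; assembling them into P and D reproduces A = PDP^{-1}.
A = np.array([[1, 3],
              [4, 2]])
P = np.array([[-1, 3],
              [ 1, 4]])
D = np.diag([-2, 5])
print(P @ D @ np.linalg.inv(P))   # recovers A
```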
11. The eigenvalues of A are given to be −1 and 5.
For λ = −1: A + I = [1 1 1; 2 2 2; 3 3 3], and row reducing [A + I  0] yields [1 1 1 0; 0 0 0 0; 0 0 0 0]. The general solution is x2[−1; 1; 0] + x3[−1; 0; 1], and a nice basis for the eigenspace is {v1, v2} = {[−1; 1; 0], [−1; 0; 1]}.
For λ = 5: A − 5I = [−5 1 1; 2 −4 2; 3 3 −3], and row reducing [A − 5I  0] yields [1 0 −1/3 0; 0 1 −2/3 0; 0 0 0 0]. The general solution is x3[1/3; 2/3; 1], and a nice basis vector for the eigenspace is v3 = [1; 2; 3].
From v1, v2, and v3 construct P = [v1 v2 v3] = [−1 −1 1; 1 0 2; 0 1 3]. Then set D = [−1 0 0; 0 −1 0; 0 0 5], where the eigenvalues in D correspond to v1, v2, and v3 respectively. Note that changing the order of the vectors in P also changes the order of the diagonal elements in D, and results in the answer given in the text.
12. The eigenvalues of A are given to be 2 and 5.
For λ = 2: A − 2I = [1 1 1; 1 1 1; 1 1 1], and row reducing [A − 2I  0] yields [1 1 1 0; 0 0 0 0; 0 0 0 0]. The general solution is x2[−1; 1; 0] + x3[−1; 0; 1], and a nice basis for the eigenspace is {v1, v2} = {[−1; 1; 0], [−1; 0; 1]}.
For λ = 5: A − 5I = [−2 1 1; 1 −2 1; 1 1 −2], and row reducing [A − 5I  0] yields [1 0 −1 0; 0 1 −1 0; 0 0 0 0]. The general solution is x3[1; 1; 1], and a basis for the eigenspace is v3 = [1; 1; 1].
From v1, v2, and v3 construct P = [v1 v2 v3] = [−1 −1 1; 1 0 1; 0 1 1]. Then set D = [2 0 0; 0 2 0; 0 0 5], where the eigenvalues in D correspond to v1, v2, and v3 respectively.
13. The eigenvalues of A are given to be 5 and 1.
For λ = 5: A − 5I = [−3 2 1; 1 −2 1; 1 2 −3], and row reducing [A − 5I  0] yields [1 0 −1 0; 0 1 −1 0; 0 0 0 0]. The general solution is x3[1; 1; 1], and a basis for the eigenspace is v1 = [1; 1; 1].
For λ = 1: A − I = [1 2 1; 1 2 1; 1 2 1], and row reducing [A − I  0] yields [1 2 1 0; 0 0 0 0; 0 0 0 0]. The general solution is x2[−2; 1; 0] + x3[−1; 0; 1], and a basis for the eigenspace is {v2, v3} = {[−2; 1; 0], [−1; 0; 1]}.
From v1, v2, and v3 construct P = [v1 v2 v3] = [1 −2 −1; 1 1 0; 1 0 1]. Then set D = [5 0 0; 0 1 0; 0 0 1], where the eigenvalues in D correspond to v1, v2, and v3 respectively.
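The 3×3 case can be checked the same way; a minimal sketch using the data of Exercise 13 (here A = [2 2 1; 1 3 1; 1 2 2], the matrix whose shifts A − 5I and A − I appear above):

```python
import numpy as np

# Exercise 13: eigenvector columns assembled into P diagonalize A,
# with D carrying the matching eigenvalues 5, 1, 1.
A = np.array([[2, 2, 1],
              [1, 3, 1],
              [1, 2, 2]])
P = np.array([[1, -2, -1],
              [1,  1,  0],
              [1,  0,  1]])
D = np.diag([5, 1, 1])
print(np.allclose(A @ P, P @ D))   # True: AP = PD, so A = PDP^{-1}
```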
14. The eigenvalues of A are given to be 2 and 3.
For λ = 2: A − 2I = [0 0 −2; 1 1 2; 0 0 1], and row reducing [A − 2I  0] yields [1 1 0 0; 0 0 1 0; 0 0 0 0]. The general solution is x2[−1; 1; 0], and a basis for the eigenspace is v1 = [−1; 1; 0].
For λ = 3: A − 3I = [−1 0 −2; 1 0 2; 0 0 0], and row reducing [A − 3I  0] yields [1 0 2 0; 0 0 0 0; 0 0 0 0]. The general solution is x2[0; 1; 0] + x3[−2; 0; 1], and a nice basis for the eigenspace is {v2, v3} = {[0; 1; 0], [−2; 0; 1]}.
From v1, v2, and v3 construct P = [v1 v2 v3] = [−1 0 −2; 1 1 0; 0 0 1]. Then set D = [2 0 0; 0 3 0; 0 0 3], where the eigenvalues in D correspond to v1, v2, and v3 respectively.
15. The eigenvalues of A are given to be 0 and 1.
For λ = 0: A − 0I = [0 −1 −1; 1 2 1; −1 −1 0], and row reducing [A − 0I  0] yields [1 0 −1 0; 0 1 1 0; 0 0 0 0]. The general solution is x3[1; −1; 1], and a basis for the eigenspace is v1 = [1; −1; 1].
For λ = 1: A − I = [−1 −1 −1; 1 1 1; −1 −1 −1], and row reducing [A − I  0] yields [1 1 1 0; 0 0 0 0; 0 0 0 0]. The general solution is x2[−1; 1; 0] + x3[−1; 0; 1], and a basis for the eigenspace is {v2, v3} = {[−1; 1; 0], [−1; 0; 1]}.
From v1, v2, and v3 construct P = [v1 v2 v3] = [1 −1 −1; −1 1 0; 1 0 1]. Then set D = [0 0 0; 0 1 0; 0 0 1], where the eigenvalues in D correspond to v1, v2, and v3 respectively. Note that the answer for P given in the text has the first column scaled by −1, which is also a correct answer, since any nonzero multiple of an eigenvector is an eigenvector.
16. The only eigenvalue of A given is 0.
For λ = 0: A − 0I = [1 −2 −3; −2 5 2; 1 −3 1], and row reducing [A − 0I  0] yields [1 0 −11 0; 0 1 −4 0; 0 0 0 0]. The general solution is x3[11; 4; 1], and a basis for the eigenspace is v1 = [11; 4; 1].
Since λ = 0 has only a one-dimensional eigenspace, we can find at most one linearly independent eigenvector for A, so A is not diagonalizable over the real numbers. The remaining eigenvalues are complex, and this situation is dealt with in Section 5.5.
17. Since A is triangular, its only eigenvalue is 2.
For λ = 2: A − 2I = [0 0 0; 2 0 0; 2 2 0], and row reducing [A − 2I  0] yields [1 0 0 0; 0 1 0 0; 0 0 0 0]. The general solution is x3[0; 0; 1], and a basis for the eigenspace is v1 = [0; 0; 1].
Since λ = 2 has only a one-dimensional eigenspace, we can find at most one linearly independent eigenvector for A, so A is not diagonalizable.
18. The eigenvalues of A are given to be −2, −1, and 0.
For λ = −2: A + 2I = [4 2 −2; −3 −1 2; 2 2 0], and row reducing [A + 2I  0] yields [1 0 −1 0; 0 1 1 0; 0 0 0 0]. The general solution is x3[1; −1; 1], and a basis for the eigenspace is v1 = [1; −1; 1].
For λ = −1: A + I = [3 2 −2; −3 −2 2; 2 2 −1], and row reducing [A + I  0] yields [1 0 −1 0; 0 1 1/2 0; 0 0 0 0]. The general solution is x3[1; −1/2; 1], and a basis for the eigenspace is v2 = [2; −1; 2].
For λ = 0: A − 0I = [2 2 −2; −3 −3 2; 2 2 −2], and row reducing [A − 0I  0] yields [1 1 0 0; 0 0 1 0; 0 0 0 0]. The general solution is x2[−1; 1; 0], and a nice basis vector for the eigenspace is v3 = [−1; 1; 0].
From v1, v2, and v3 construct P = [v1 v2 v3] = [1 2 −1; −1 −1 1; 1 2 0]. Then set D = [−2 0 0; 0 −1 0; 0 0 0], where the eigenvalues in D correspond to v1, v2, and v3 respectively.
19. Since A is triangular, its eigenvalues are 2, 3, and 5.
For λ = 2: A − 2I = [3 3 0 9; 0 1 1 2; 0 0 0 0; 0 0 0 0], and row reducing [A − 2I  0] yields [1 0 −1 1 0; 0 1 1 2 0; 0 0 0 0 0; 0 0 0 0 0]. The general solution is x3[1; −1; 1; 0] + x4[−1; −2; 0; 1], and a nice basis for the eigenspace is {v1, v2} = {[1; −1; 1; 0], [−1; −2; 0; 1]}.
For λ = 3: A − 3I = [2 3 0 9; 0 0 1 2; 0 0 −1 0; 0 0 0 −1], and row reducing [A − 3I  0] yields [1 3/2 0 0 0; 0 0 1 0 0; 0 0 0 1 0; 0 0 0 0 0]. The general solution is x2[−3/2; 1; 0; 0], and a nice basis for the eigenspace is v3 = [−3; 2; 0; 0].
For λ = 5: A − 5I = [0 3 0 9; 0 −2 1 2; 0 0 −3 0; 0 0 0 −3], and row reducing [A − 5I  0] yields [0 1 0 0 0; 0 0 1 0 0; 0 0 0 1 0; 0 0 0 0 0]. The general solution is x1[1; 0; 0; 0], and a basis for the eigenspace is v4 = [1; 0; 0; 0].
From v1, v2, v3, and v4 construct P = [v1 v2 v3 v4] = [1 −1 −3 1; −1 −2 2 0; 1 0 0 0; 0 1 0 0]. Then set D = [2 0 0 0; 0 2 0 0; 0 0 3 0; 0 0 0 5], where the eigenvalues in D correspond to v1, v2, v3, and v4 respectively.
Note that this answer differs from the text. There, P = [v4 v3 v1 v2] and the entries in D are rearranged to match the new order of the eigenvectors. According to the Diagonalization Theorem, both answers are correct.
20. Since A is triangular, its eigenvalues are 2 and 3.
For λ = 2: A − 2I = [1 0 0 0; 0 0 0 0; 0 0 0 0; 1 0 0 1], and row reducing [A − 2I  0] yields [1 0 0 0 0; 0 0 0 1 0; 0 0 0 0 0; 0 0 0 0 0]. The general solution is x2[0; 1; 0; 0] + x3[0; 0; 1; 0], and a basis for the eigenspace is {v1, v2} = {[0; 1; 0; 0], [0; 0; 1; 0]}.
For λ = 3: A − 3I = [0 0 0 0; 0 −1 0 0; 0 0 −1 0; 1 0 0 0], and row reducing [A − 3I  0] yields [1 0 0 0 0; 0 1 0 0 0; 0 0 1 0 0; 0 0 0 0 0]. The general solution is x4[0; 0; 0; 1], and a basis for the eigenspace is v3 = [0; 0; 0; 1].
Since λ = 3 has only a one-dimensional eigenspace, we can find at most three linearly independent eigenvectors for A, so A is not diagonalizable.
21. a. False. The symbol D does not automatically denote a diagonal matrix.
b. True. See the remark after the statement of the Diagonalization Theorem.
c. False. The 3×3 matrix in Example 4 has 3 eigenvalues, counting multiplicities, but it is not
diagonalizable.
d. False. Invertibility depends on 0 not being an eigenvalue. (See the Invertible Matrix Theorem.)
A diagonalizable matrix may or may not have 0 as an eigenvalue. See Examples 3 and 5 for both
possibilities.
22. a. False. The n eigenvectors must be linearly independent. See the Diagonalization Theorem.
b. False. The matrix in Example 3 is diagonalizable, but it has only 2 distinct eigenvalues. (The
statement given is the converse of Theorem 6.)
c. True. This follows from AP = PD and formulas (1) and (2) in the proof of the Diagonalization Theorem.
d. False. See Example 4. The matrix there is invertible because 0 is not an eigenvalue, but the matrix
is not diagonalizable.
23. A is diagonalizable because you know that five linearly independent eigenvectors exist: three in the
three-dimensional eigenspace and two in the two-dimensional eigenspace. Theorem 7 guarantees that
the set of all five eigenvectors is linearly independent.
24. No, by Theorem 7(b). Here is an explanation that does not appeal to Theorem 7: Let v1 and v2 be eigenvectors that span the two one-dimensional eigenspaces. If v is any other eigenvector, then it belongs to one of the eigenspaces and hence is a multiple of either v1 or v2. So there cannot exist three linearly independent eigenvectors. By the Diagonalization Theorem, A cannot be diagonalizable.
25. Let {v1} be a basis for the one-dimensional eigenspace, let v2 and v3 form a basis for the two-dimensional eigenspace, and let v4 be any eigenvector in the remaining eigenspace. By Theorem 7, {v1, v2, v3, v4} is linearly independent. Since A is 4×4, the Diagonalization Theorem shows that A is diagonalizable.
26. Yes, if the third eigenspace is only one-dimensional. In this case, the sum of the dimensions of the eigenspaces will be six, whereas the matrix is 7×7. See Theorem 7(b). An argument similar to that for Exercise 24 can also be given.
27. If A is diagonalizable, then A = PDP⁻¹ for some invertible P and diagonal D. Since A is invertible, 0 is not an eigenvalue of A. So the diagonal entries in D (which are eigenvalues of A) are not zero, and D is invertible. By the theorem on the inverse of a product,
A⁻¹ = (PDP⁻¹)⁻¹ = (P⁻¹)⁻¹D⁻¹P⁻¹ = PD⁻¹P⁻¹
Since D⁻¹ is obviously diagonal, A⁻¹ is diagonalizable.
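The identity in Exercise 27 can be confirmed numerically; a minimal sketch, where the invertible P and the nonzero diagonal D are arbitrary illustrations:

```python
import numpy as np

# Exercise 27: if A = PDP^{-1} is invertible, then A^{-1} = P D^{-1} P^{-1},
# so A^{-1} is diagonalized by the same P.
P = np.array([[1.0, 1.0],
              [1.0, 3.0]])          # an invertible P chosen for illustration
D = np.diag([2.0, 5.0])             # nonzero eigenvalues, so D is invertible
A = P @ D @ np.linalg.inv(P)

Ainv = P @ np.diag(1 / np.diag(D)) @ np.linalg.inv(P)
print(np.allclose(Ainv, np.linalg.inv(A)))   # True
```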
28. If A has n linearly independent eigenvectors, then by the Diagonalization Theorem, A = PDP⁻¹ for some invertible P and diagonal D. Using properties of transposes,
Aᵀ = (PDP⁻¹)ᵀ = (P⁻¹)ᵀDᵀPᵀ = (Pᵀ)⁻¹DPᵀ = QDQ⁻¹
where Q = (Pᵀ)⁻¹. Thus Aᵀ is diagonalizable. By the Diagonalization Theorem, the columns of Q are n linearly independent eigenvectors of Aᵀ.
29. The diagonal entries in D1 are reversed from those in D. So interchange the (eigenvector) columns of P to make them correspond properly to the eigenvalues in D1. In this case,
P1 = [1 1; −2 −1]  and  D1 = [3 0; 0 5]
Although the first column of P must be an eigenvector corresponding to the eigenvalue 3, there is nothing to prevent us from selecting some multiple of [1; −2], say [3; −6], and letting P2 = [3 1; −6 −1]. We now have three different factorizations or "diagonalizations" of A:
A = PDP⁻¹ = P1D1P1⁻¹ = P2D1P2⁻¹
30. A nonzero multiple of an eigenvector is another eigenvector. To produce P2, simply multiply one or both columns of P by a nonzero scalar other than 1.
31. For a 2×2 matrix A to be invertible, its eigenvalues must be nonzero. A first attempt at a construction might be something such as [2 3; 0 4], whose eigenvalues are 2 and 4. Unfortunately, a 2×2 matrix with two distinct eigenvalues is diagonalizable (Theorem 6). So, adjust the construction to [2 3; 0 2], which works. In fact, any matrix of the form [a b; 0 a] has the desired properties when a and b are nonzero. The eigenspace for the eigenvalue a is one-dimensional, as a simple calculation shows, and there is no other eigenvalue to produce a second eigenvector.
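The two claims about [2 3; 0 2] can be checked numerically; a minimal sketch:

```python
import numpy as np

# Exercise 31: A = [2 3; 0 2] is invertible (det = 4, eigenvalue 2 != 0),
# yet not diagonalizable: the eigenspace for 2 is only one-dimensional,
# since A - 2I has rank 1 and hence nullity 2 - 1 = 1.
A = np.array([[2.0, 3.0],
              [0.0, 2.0]])
det_A = np.linalg.det(A)
nullity = 2 - np.linalg.matrix_rank(A - 2 * np.eye(2))
print(det_A, nullity)   # 4.0 and 1
```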
32. Any 2×2 matrix with two distinct eigenvalues is diagonalizable, by Theorem 6. If one of those eigenvalues is zero, then the matrix will not be invertible. Any matrix of the form [0 0; a b] has the desired properties when a and b are nonzero. The number b must be nonzero to make the matrix diagonalizable (its eigenvalues 0 and b are then distinct); a must be nonzero to make the matrix not diagonal. Other solutions are [a b; 0 0] and [0 a; 0 b].
33. [M] A = [9 −4 −2 −4; −56 32 −28 44; −14 −14 6 −14; 42 −33 21 −45],
ev = eig(A) = (13, −12, −12, 13)
nulbasis(A-ev(1)*eye(4)) = [−0.5000 0.3333; 0 −1.3333; 1.0000 0; 0 1.0000]
A basis for the eigenspace of λ = 13 is {[−1; 0; 2; 0], [1; −4; 0; 3]}.
nulbasis(A-ev(2)*eye(4)) = [0.2857 0; 1.0000 −1.0000; 1.0000 0; 0 1.0000]
A basis for the eigenspace of λ = −12 is {[2; 7; 7; 0], [0; −1; 0; 1]}.
Thus we construct P = [−1 1 2 0; 0 −4 7 −1; 2 0 7 0; 0 3 0 1] and D = [13 0 0 0; 0 13 0 0; 0 0 −12 0; 0 0 0 −12]. Notice that the answer in the text lists the eigenvectors in a different order in P, and hence the eigenvalues are listed in a different order in D. Both answers are correct.
34. [M] A = [4 −9 −7 8 2; −7 −9 0 7 14; 5 10 5 −5 −10; −2 3 7 0 4; −3 −13 −7 10 11],
ev = eig(A) = (5, −2, −2, 5, 5)
nulbasis(A-ev(1)*eye(5)) = [2.0000 −1.0000 2.0000; −1.0000 1.0000 0; 1.0000 0 0; 0 1.0000 0; 0 0 1.0000]
A basis for the eigenspace of λ = 5 is {[2; −1; 1; 0; 0], [−1; 1; 0; 1; 0], [2; 0; 0; 0; 1]}.
nulbasis(A-ev(2)*eye(5)) = [−0.4000 0.6000; 1.4000 1.4000; −1.0000 −1.0000; 1.0000 0; 0 1.0000]
A basis for the eigenspace of λ = −2 is {[−2; 7; −5; 5; 0], [3; 7; −5; 0; 5]}.
Thus we construct P = [2 −1 2 −2 3; −1 1 0 7 7; 1 0 0 −5 −5; 0 1 0 5 0; 0 0 1 0 5] and D = [5 0 0 0 0; 0 5 0 0 0; 0 0 5 0 0; 0 0 0 −2 0; 0 0 0 0 −2].
35. [M] For the matrix A given in the exercise,
ev = eig(A) = (7, 14, −14, 7, 7)
nulbasis(A-ev(1)*eye(5)) yields a basis for the eigenspace of λ = 7:
{[2; 1; 0; 0; 0], [1; 0; 1; 1; 0], [3; 0; 0; 0; 2]}
nulbasis(A-ev(2)*eye(5)) and nulbasis(A-ev(3)*eye(5)) yield bases for the eigenspaces of λ = 14 and λ = −14: {[1; 1; 0; 1; 0]} and {[0; 0; 1; 0; 1]} respectively.
Thus we construct P = [2 1 3 1 0; 1 0 0 1 0; 0 1 0 0 1; 0 1 0 1 0; 0 0 2 0 1] and D = [7 0 0 0 0; 0 7 0 0 0; 0 0 7 0 0; 0 0 0 14 0; 0 0 0 0 −14].
36. [M] For the matrix A given in the exercise,
ev = eig(A) = (24, −48, 36, −48, 36)
A basis for the eigenspace of λ = 24 is {[1; 1; 0; 1; 0]}.
A basis for the eigenspace of λ = −48 is {[0; 1; 0; 1; 0], [0; 0; 1; 0; 1]}.
A basis for the eigenspace of λ = 36 is {[1; 0; 3; 1; 0], [1; 3; 0; 0; 3]}.
Thus we construct P = [1 0 0 1 1; 1 1 0 0 3; 0 0 1 3 0; 1 1 0 1 0; 0 0 1 0 3] and D = [24 0 0 0 0; 0 −48 0 0 0; 0 0 −48 0 0; 0 0 0 36 0; 0 0 0 0 36]. Notice that the answer in the text lists the eigenvectors in a different order in P, and hence the eigenvalues are listed in a different order in D. Both answers are correct.
Notes: For your use, another good test matrix for this section has five distinct real eigenvalues; to four decimal places, they are 11.0654, 9.8785, 3.8238, −3.7332, and −6.0345.
The MATLAB box in the Study Guide encourages students to use eig(A) and nulbasis to practice the diagonalization procedure in this section. It also remarks that in later work, a student may automate the process, using the command [P D] = eig(A). You may wish to permit students to use the full power of eig in some problems in Sections 5.5 and 5.7.
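For readers working outside MATLAB, a rough numpy analogue of the poly and eig commands mentioned above can be sketched as follows (the 2×2 test matrix is an arbitrary illustration):

```python
import numpy as np

# numpy's np.poly(A) plays the role of MATLAB's poly(A): it returns the
# coefficients of the characteristic polynomial of the matrix A, while
# np.linalg.eig(A) plays the role of [P, D] = eig(A).
A = np.array([[2.0, 0.0],
              [1.0, 3.0]])
coeffs = np.poly(A)                  # [1, -5, 6]  <->  t^2 - 5t + 6
eigenvalues, P = np.linalg.eig(A)    # eigenvalues 2 and 3, eigenvectors in P
print(coeffs, sorted(eigenvalues.real))
```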
5.4 SOLUTIONS

1. Since T(b1) = 3d1 − 5d2, [T(b1)]_D = [3; −5]. Likewise T(b2) = −d1 + 6d2 implies that [T(b2)]_D = [−1; 6], and T(b3) = 4d2 implies that [T(b3)]_D = [0; 4]. Thus the matrix for T relative to B and D is
[[T(b1)]_D [T(b2)]_D [T(b3)]_D] = [3 −1 0; −5 6 4]

2. Since T(d1) = 3b1 − 3b2, [T(d1)]_B = [3; −3]. Likewise T(d2) = 2b1 + 5b2 implies that [T(d2)]_B = [2; 5]. Thus the matrix for T relative to D and B is
[[T(d1)]_B [T(d2)]_B] = [3 2; −3 5]
3. a. T(e1) = 0b1 + 0b2 + b3, T(e2) = −b1 + 2b2 + 0b3, T(e3) = 2b1 + 0b2 + 3b3
b. [T(e1)]_B = [0; 0; 1], [T(e2)]_B = [−1; 2; 0], [T(e3)]_B = [2; 0; 3]
c. The matrix for T relative to E and B is [[T(e1)]_B [T(e2)]_B [T(e3)]_B] = [0 −1 2; 0 2 0; 1 0 3].

4. Let E = {e1, e2} be the standard basis for R². Since [T(b1)]_E = T(b1) = [2; 2], [T(b2)]_E = T(b2) = [3; 0], and [T(b3)]_E = T(b3) = [1; 5], the matrix for T relative to B and E is
[[T(b1)]_E [T(b2)]_E [T(b3)]_E] = [2 3 1; 2 0 5]
5. a. T(p) = (t + 3)(3 − 2t + t²) = 9 − 3t + t² + t³
b. Let p and q be polynomials in P2, and let c be any scalar. Then
T(p(t) + q(t)) = (t + 3)[p(t) + q(t)] = (t + 3)p(t) + (t + 3)q(t) = T(p(t)) + T(q(t))
T(c·p(t)) = (t + 3)[c·p(t)] = c·(t + 3)p(t) = c·T(p(t))
and T is a linear transformation.
c. Let B = {1, t, t²} and C = {1, t, t², t³}. Since T(b1) = T(1) = (t + 3)(1) = 3 + t, [T(b1)]_C = [3; 1; 0; 0]. Likewise since T(b2) = T(t) = (t + 3)(t) = 3t + t², [T(b2)]_C = [0; 3; 1; 0], and since
T(b3) = T(t²) = (t + 3)(t²) = 3t² + t³, [T(b3)]_C = [0; 0; 3; 1]. Thus the matrix for T relative to B and C is
[[T(b1)]_C [T(b2)]_C [T(b3)]_C] = [3 0 0; 1 3 0; 0 1 3; 0 0 1]
6. a. T(p) = (3 + 2t + t²) + 2t²(3 + 2t + t²) = 3 + 2t + 7t² + 4t³ + 2t⁴
b. Let p and q be polynomials in P2, and let c be any scalar. Then
T(p(t) + q(t)) = [p(t) + q(t)] + 2t²[p(t) + q(t)] = [p(t) + 2t²p(t)] + [q(t) + 2t²q(t)] = T(p(t)) + T(q(t))
T(c·p(t)) = c·p(t) + 2t²[c·p(t)] = c·[p(t) + 2t²p(t)] = c·T(p(t))
and T is a linear transformation.
c. Let B = {1, t, t²} and C = {1, t, t², t³, t⁴}. Since T(b1) = T(1) = 1 + 2t²(1) = 1 + 2t², [T(b1)]_C = [1; 0; 2; 0; 0]. Likewise since T(b2) = T(t) = t + 2t²(t) = t + 2t³, [T(b2)]_C = [0; 1; 0; 2; 0], and since T(b3) = T(t²) = t² + 2t²(t²) = t² + 2t⁴, [T(b3)]_C = [0; 0; 1; 0; 2]. Thus the matrix for T relative to B and C is
[[T(b1)]_C [T(b2)]_C [T(b3)]_C] = [1 0 0; 0 1 0; 2 0 1; 0 2 0; 0 0 2]
7. Since T(b1) = T(1) = 3 + 5t, [T(b1)]_B = [3; 5; 0]. Likewise since T(b2) = T(t) = 2t + 4t², [T(b2)]_B = [0; 2; 4], and since T(b3) = T(t²) = t², [T(b3)]_B = [0; 0; 1]. Thus the matrix representation of T relative to the basis B is
[[T(b1)]_B [T(b2)]_B [T(b3)]_B] = [3 0 0; 5 2 0; 0 4 1]
Perhaps a faster way is to realize that the information given provides the general form of T(p) as shown in the figure below:

a0 + a1·t + a2·t²   --T-->   3a0 + (5a0 + 2a1)t + (4a1 + a2)t²
(coordinate mapping on each side)
[a0; a1; a2]   --multiplication by [T]_B-->   [3a0; 5a0 + 2a1; 4a1 + a2]

The matrix that implements the multiplication along the bottom of the figure is easily filled in by inspection:
[? ? ?; ? ? ?; ? ? ?][a0; a1; a2] = [3a0; 5a0 + 2a1; 4a1 + a2]  implies that  [T]_B = [3 0 0; 5 2 0; 0 4 1]

8. Since [4b1 + 3b2]_B = [4; 3; 0],
[T(4b1 + 3b2)]_B = [T]_B [4b1 + 3b2]_B = [0 0 1; 2 −1 −2; −1 3 1][4; 3; 0] = [0; 5; 5]
and T(4b1 + 3b2) = 5b2 + 5b3.
9. a. T(p) = [5 + 3(−1); 5 + 3(0); 5 + 3(1)] = [2; 5; 8]
b. Let p and q be polynomials in P2, and let c be any scalar. Then
T(p + q) = [(p + q)(−1); (p + q)(0); (p + q)(1)] = [p(−1) + q(−1); p(0) + q(0); p(1) + q(1)] = [p(−1); p(0); p(1)] + [q(−1); q(0); q(1)] = T(p) + T(q)
T(c·p) = [(c·p)(−1); (c·p)(0); (c·p)(1)] = [c·p(−1); c·p(0); c·p(1)] = c·[p(−1); p(0); p(1)] = c·T(p)
and T is a linear transformation.
c. Let B = {1, t, t²} and let E = {e1, e2, e3} be the standard basis for R³. Since [T(b1)]_E = T(b1) = T(1) = [1; 1; 1], [T(b2)]_E = T(b2) = T(t) = [−1; 0; 1], and [T(b3)]_E = T(b3) = T(t²) = [1; 0; 1], the matrix for T relative to B and E is
[[T(b1)]_E [T(b2)]_E [T(b3)]_E] = [1 −1 1; 1 0 0; 1 1 1]
10. a. Let p and q be polynomials in P3, and let c be any scalar. Then
T(p + q) = [(p + q)(−2); (p + q)(3); (p + q)(1); (p + q)(0)] = [p(−2) + q(−2); p(3) + q(3); p(1) + q(1); p(0) + q(0)] = T(p) + T(q)
T(c·p) = [(c·p)(−2); (c·p)(3); (c·p)(1); (c·p)(0)] = [c·p(−2); c·p(3); c·p(1); c·p(0)] = c·T(p)
and T is a linear transformation.
b. Let B = {1, t, t², t³} and let E = {e1, e2, e3, e4} be the standard basis for R⁴. Since [T(b1)]_E = T(b1) = T(1) = [1; 1; 1; 1], [T(b2)]_E = T(b2) = T(t) = [−2; 3; 1; 0], [T(b3)]_E = T(b3) = T(t²) = [4; 9; 1; 0], and [T(b4)]_E = T(b4) = T(t³) = [−8; 27; 1; 0], the matrix for T relative to B and E is
[[T(b1)]_E [T(b2)]_E [T(b3)]_E [T(b4)]_E] = [1 −2 4 −8; 1 3 9 27; 1 1 1 1; 1 0 0 0]
11. Following Example 4, if P = [b1 b2] = [−1 −1; 2 1], then the B-matrix is
P⁻¹AP = [1 1; −2 −1][−4 −1; 6 1][−1 −1; 2 1] = [−2 −2; 0 −1]

12. Following Example 4, if P = [b1 b2] = [0 1; 1 2], then the B-matrix is
P⁻¹AP = [−2 1; 1 0][6 −2; 4 0][0 1; 1 2] = [4 0; −2 2]
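These B-matrix computations are easy to check numerically; a minimal sketch using the data of Exercise 11:

```python
import numpy as np

# Exercise 11: the matrix of the transformation x |-> Ax relative to the
# basis B = {b1, b2} is P^{-1} A P, where P = [b1 b2].
A = np.array([[-4, -1],
              [ 6,  1]])
P = np.array([[-1, -1],
              [ 2,  1]])
B_matrix = np.linalg.inv(P) @ A @ P
print(np.round(B_matrix))   # [-2 -2; 0 -1]
```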
13. Start by diagonalizing A. The characteristic polynomial is λ² − 4λ + 3 = (λ − 1)(λ − 3), so the eigenvalues of A are 1 and 3.
For λ = 1: A − I = [−1 −1; 3 3]. The equation (A − I)x = 0 amounts to x1 + x2 = 0, so x1 = −x2 with x2 free. A basis vector for the eigenspace is thus v1 = [−1; 1].
For λ = 3: A − 3I = [−3 −1; 3 1]. The equation (A − 3I)x = 0 amounts to 3x1 + x2 = 0, so x1 = −(1/3)x2 with x2 free. A nice basis vector for the eigenspace is thus v2 = [−1; 3].
From v1 and v2 we may construct P = [v1 v2] = [−1 −1; 1 3], which diagonalizes A. By Theorem 8, the basis B = {v1, v2} has the property that the B-matrix of the transformation x ↦ Ax is a diagonal matrix.
14. Start by diagonalizing A. The characteristic polynomial is λ² − 4λ − 5 = (λ − 5)(λ + 1), so the eigenvalues of A are 5 and −1.
For λ = 5: A − 5I = [−3 3; 3 −3]. The equation (A − 5I)x = 0 amounts to x1 − x2 = 0, so x1 = x2 with x2 free. A basis vector for the eigenspace is thus v1 = [1; 1].
For λ = −1: A + I = [3 3; 3 3]. The equation (A + I)x = 0 amounts to x1 + x2 = 0, so x1 = −x2 with x2 free. A nice basis vector for the eigenspace is thus v2 = [−1; 1].
From v1 and v2 we may construct P = [v1 v2] = [1 −1; 1 1], which diagonalizes A. By Theorem 8, the basis B = {v1, v2} has the property that the B-matrix of the transformation x ↦ Ax is a diagonal matrix.
15. Start by diagonalizing A. The characteristic polynomial is λ² + 3λ − 10 = (λ + 5)(λ − 2), so the eigenvalues of A are −5 and 2.
For λ = −5: A + 5I = [6 2; 3 1]. The equation (A + 5I)x = 0 amounts to 3x1 + x2 = 0, so x1 = −(1/3)x2 with x2 free. A basis vector for the eigenspace is thus v1 = [1; −3].
For λ = 2: A − 2I = [−1 2; 3 −6]. The equation (A − 2I)x = 0 amounts to x1 − 2x2 = 0, so x1 = 2x2 with x2 free. A basis vector for the eigenspace is thus v2 = [2; 1].
From v1 and v2 we may construct P = [v1 v2] = [1 2; −3 1], which diagonalizes A. By Theorem 8, the basis B = {v1, v2} has the property that the B-matrix of the transformation x ↦ Ax is a diagonal matrix. Note that the solution in the text lists the vectors in the reverse order, which is also correct.
16. Start by diagonalizing A. The characteristic polynomial is λ² − 9λ + 18 = (λ − 3)(λ − 6), so the eigenvalues of A are 3 and 6.

For λ = 3: A − 3I = [1 2; 1 2]. The equation (A − 3I)x = 0 amounts to x₁ + 2x₂ = 0, so x₁ = −2x₂ with x₂ free. A basis vector for the eigenspace is thus v₁ = [-2; 1].

For λ = 6: A − 6I = [-2 2; 1 -1]. The equation (A − 6I)x = 0 amounts to x₁ − x₂ = 0, so x₁ = x₂ with x₂ free. A basis vector for the eigenspace is thus v₂ = [1; 1].

From v₁ and v₂ we may construct P = [v₁ v₂] = [-2 1; 1 1], which diagonalizes A. By Theorem 8, the basis B = {v₁, v₂} has the property that the B-matrix of the transformation x ↦ Ax is a diagonal matrix.
17. a. We compute that

Ab₁ = [4 -1; 1 2][1; 1] = [3; 3] = 3b₁

so b₁ is an eigenvector of A corresponding to the eigenvalue 3. The characteristic polynomial of A is λ² − 6λ + 9 = (λ − 3)², so 3 is the only eigenvalue for A. Now A − 3I = [1 -1; 1 -1], which implies that the eigenspace corresponding to the eigenvalue 3 is one-dimensional. Thus the matrix A is not diagonalizable.
b. Following Example 4, if P = [b₁ b₂], then the B-matrix for T is

P⁻¹AP = [2 -1; -1 1][4 -1; 1 2][1 1; 1 2] = [3 -1; 0 3]
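A quick numerical check of part (b) — a sketch, not part of the original solution, assuming the data recovered above — confirms the triangular B-matrix:

```python
import numpy as np

# Exercise 17(b): when A is not diagonalizable, the B-matrix P^{-1} A P
# need not be diagonal; here it comes out upper triangular.
A = np.array([[4.0, -1.0], [1.0, 2.0]])
P = np.array([[1.0, 1.0], [1.0, 2.0]])   # columns b1, b2
M = np.linalg.inv(P) @ A @ P
assert np.allclose(M, [[3.0, -1.0], [0.0, 3.0]])
```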
18. If there is a basis B such that [T]_B is diagonal, then A is similar to a diagonal matrix, by the second paragraph following Example 3. In this case, A would have three linearly independent eigenvectors. However, this is not necessarily the case, because A has only two distinct eigenvalues.
19. If A is similar to B, then there exists an invertible matrix P such that P⁻¹AP = B. Thus B is invertible because it is the product of invertible matrices. By a theorem about inverses of products,

B⁻¹ = (P⁻¹AP)⁻¹ = P⁻¹A⁻¹(P⁻¹)⁻¹ = P⁻¹A⁻¹P

which shows that A⁻¹ is similar to B⁻¹.
20. If A = PBP⁻¹, then

A² = (PBP⁻¹)(PBP⁻¹) = PB(P⁻¹P)BP⁻¹ = PB·I·BP⁻¹ = PB²P⁻¹

So A² is similar to B².
21. By hypothesis, there exist invertible P and Q such that P⁻¹BP = A and Q⁻¹CQ = A. Then P⁻¹BP = Q⁻¹CQ. Left-multiply by Q and right-multiply by Q⁻¹ to obtain QP⁻¹BPQ⁻¹ = QQ⁻¹CQQ⁻¹. So C = QP⁻¹BPQ⁻¹ = (QP⁻¹)B(QP⁻¹)⁻¹, which shows that B is similar to C.
22. If A is diagonalizable, then A = PDP⁻¹ for some P. Also, if B is similar to A, then B = QAQ⁻¹ for some Q. Then

B = Q(PDP⁻¹)Q⁻¹ = (QP)D(P⁻¹Q⁻¹) = (QP)D(QP)⁻¹

So B is diagonalizable.
23. If Ax = λx with x ≠ 0, then P⁻¹Ax = λP⁻¹x. If B = P⁻¹AP, then

B(P⁻¹x) = P⁻¹AP(P⁻¹x) = P⁻¹Ax = λP⁻¹x    (*)

by the first calculation. Note that P⁻¹x ≠ 0, because x ≠ 0 and P⁻¹ is invertible. Hence (*) shows that P⁻¹x is an eigenvector of B corresponding to λ. (Of course, λ is an eigenvalue of both A and B because the matrices are similar, by Theorem 4 in Section 5.2.)
24. If A = PBP⁻¹, then rank A = rank P(BP⁻¹) = rank BP⁻¹, by Supplementary Exercise 13 in Chapter 4. Also, rank BP⁻¹ = rank B, by Supplementary Exercise 14 in Chapter 4, since P⁻¹ is invertible. Thus rank A = rank B.
25. If A = PBP⁻¹, then

tr(A) = tr((PB)P⁻¹) = tr(P⁻¹(PB))    By the trace property
     = tr(P⁻¹PB) = tr(IB) = tr(B)

If B is diagonal, then the diagonal entries of B must be the eigenvalues of A, by the Diagonalization Theorem (Theorem 5 in Section 5.3). So tr A = tr B = {sum of the eigenvalues of A}.
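The trace property used here is easy to illustrate numerically. The sketch below (with hypothetical matrices, not from the text) builds a matrix similar to a diagonal B and compares traces:

```python
import numpy as np

# Similar matrices share their trace, and the trace equals the sum of
# the eigenvalues (the diagonal entries of B here: 2 + 3 + 7 = 12).
rng = np.random.default_rng(0)
B = np.diag([2.0, 3.0, 7.0])
P = rng.random((3, 3)) + 4 * np.eye(3)   # diagonally dominant, so invertible
A = P @ B @ np.linalg.inv(P)
assert np.isclose(np.trace(A), np.trace(B))
assert np.isclose(np.trace(A), np.sum(np.linalg.eigvals(A)).real)
```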
26. If A = PDP⁻¹ for some P, then the general trace property from Exercise 25 shows that tr A = tr[(PD)P⁻¹] = tr[P⁻¹PD] = tr D. (Or, one can use the result of Exercise 25 that since A is similar to D, tr A = tr D.) Since the eigenvalues of A are on the main diagonal of D, tr D is the sum of the eigenvalues of A.
27. For each j, I(b_j) = b_j. Since the standard coordinate vector of any vector in Rⁿ is just the vector itself, [I(b_j)]_ε = b_j. Thus the matrix for I relative to B and the standard basis ε is simply [b₁ b₂ ⋯ b_n]. This matrix is precisely the change-of-coordinates matrix P_B defined in Section 4.4.
28. For each j, I(b_j) = b_j, and [I(b_j)]_C = [b_j]_C. By formula (4), the matrix for I relative to the bases B and C is

M = [[b₁]_C [b₂]_C ⋯ [b_n]_C]

In Theorem 15 of Section 4.7, this matrix was denoted by P_{C←B} and was called the change-of-coordinates matrix from B to C.
29. If B = {b₁, …, b_n}, then the B-coordinate vector of b_j is e_j, the standard basis vector for Rⁿ. For instance,

b₁ = 1·b₁ + 0·b₂ + ⋯ + 0·b_n

Thus [I(b_j)]_B = [b_j]_B = e_j, and

[I]_B = [[I(b₁)]_B ⋯ [I(b_n)]_B] = [e₁ ⋯ e_n] = I
30. [M] If P is the matrix whose columns come from B, then the B-matrix of the transformation x ↦ Ax is D = P⁻¹AP. From the data in the text,

A = [6 -2 -2; 3 1 -2; 2 -2 2],  P = [b₁ b₂ b₃] = [1 2 1; 1 1 1; 1 3 0],

D = P⁻¹AP = [-3 3 1; 1 -1 0; 2 -1 -1][6 -2 -2; 3 1 -2; 2 -2 2][1 2 1; 1 1 1; 1 3 0] = [2 -1 0; 0 3 0; 0 -1 4]
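Since the matrices in this [M] exercise were recovered from a badly garbled scan, a numerical verification sketch (assuming the entries above) may be useful:

```python
import numpy as np

# Exercise 30: verify D = P^{-1} A P for the recovered data.
A = np.array([[6.0, -2.0, -2.0], [3.0, 1.0, -2.0], [2.0, -2.0, 2.0]])
P = np.array([[1.0, 2.0, 1.0], [1.0, 1.0, 1.0], [1.0, 3.0, 0.0]])
D = np.linalg.inv(P) @ A @ P
assert np.allclose(D, [[2.0, -1.0, 0.0], [0.0, 3.0, 0.0], [0.0, -1.0, 4.0]])
```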
31. [M] If P is the matrix whose columns come from B, then the B-matrix of the transformation x ↦ Ax is D = P⁻¹AP. From the data in the text,

A = [-7 -48 -16; 1 14 6; -3 -45 -19],  P = [b₁ b₂ b₃] = [-3 -2 -3; 1 1 1; -3 -3 0],

D = P⁻¹AP = [-1 -3 -1/3; 1 3 0; 0 1 1/3][-7 -48 -16; 1 14 6; -3 -45 -19][-3 -2 -3; 1 1 1; -3 -3 0] = [-7 -2 6; 0 -4 6; 0 0 -1]
32. [M] A = [-6 4 0 9; -3 0 -1 6; 1 2 1 0; -4 4 0 7],

ev = eig(A) = (5, 1, -2, -2)

nulbasis(A-ev(1)*eye(4)) = [1.0000; 0.5000; 0.5000; 1.0000]. A basis for the eigenspace of λ = 5 is b₁ = [2; 1; 1; 2].

nulbasis(A-ev(2)*eye(4)) = [1.0000; -0.5000; 3.5000; 1.0000]. A basis for the eigenspace of λ = 1 is b₂ = [2; -1; 7; 2].

nulbasis(A-ev(3)*eye(4)) = [1.0000 1.5000; 1.0000 -0.7500; -1.0000 0; 0 1.0000]. A basis for the eigenspace of λ = -2 is {b₃, b₄} = {[1; 1; -1; 0], [6; -3; 0; 4]}.

The basis B = {b₁, b₂, b₃, b₄} is a basis for R⁴ with the property that [T]_B is diagonal.
Note:
The Study Guide comments on Exercise 26 and tells students that the trace of any square matrix A
equals the sum of the eigenvalues of A, counted according to multiplicities. This provides a quick check
on the accuracy of an eigenvalue calculation. You could also refer students to the property of the
determinant described in Exercise 19 of Section 5.2.
5.5 SOLUTIONS _____________________________________________
1. A = [1 -2; 1 3], A − λI = [1-λ -2; 1 3-λ],

det(A − λI) = (1 − λ)(3 − λ) − (−2) = λ² − 4λ + 5

Use the quadratic formula to find the eigenvalues: λ = (4 ± √(16 − 20))/2 = 2 ± i. Example 2 gives a shortcut for finding one eigenvector, and Example 5 shows how to write the other eigenvector with no effort.

For λ = 2 + i: A − (2 + i)I = [-1-i -2; 1 1-i]. The equation (A − λI)x = 0 gives

(−1 − i)x₁ − 2x₂ = 0
x₁ + (1 − i)x₂ = 0

As in Example 2, the two equations are equivalent—each determines the same relation between x₁ and x₂. So use the second equation to obtain x₁ = −(1 − i)x₂, with x₂ free. The general solution is x₂[-1+i; 1], and the vector v₁ = [-1+i; 1] provides a basis for the eigenspace.

For λ = 2 − i: Let v₂ = v̄₁ = [-1-i; 1]. The remark prior to Example 5 shows that v₂ is automatically an eigenvector for 2 − i. In fact, calculations similar to those above would show that {v₂} is a basis for the eigenspace. (In general, for a real matrix A, it can be shown that the set of complex conjugates of the vectors in a basis of the eigenspace for λ is a basis of the eigenspace for λ̄.)
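A numpy sketch (not part of the original solution) confirming the eigenpair found above:

```python
import numpy as np

# Exercise 1: A has complex eigenvalues 2 +/- i, and v1 = [-1+i, 1]
# is an eigenvector for 2 + i.
A = np.array([[1.0, -2.0], [1.0, 3.0]])
lam = np.linalg.eigvals(A)
assert np.allclose(sorted(lam, key=lambda z: z.imag), [2 - 1j, 2 + 1j])
v1 = np.array([-1 + 1j, 1 + 0j])
assert np.allclose(A @ v1, (2 + 1j) * v1)
```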
2. A = [3 -3; 3 3]. The characteristic polynomial is λ² − 6λ + 18, so the eigenvalues of A are

λ = (6 ± √(36 − 72))/2 = 3 ± 3i

For λ = 3 + 3i: A − (3 + 3i)I = [-3i -3; 3 -3i]. The equation (A − (3 + 3i)I)x = 0 amounts to x₁ − ix₂ = 0, so x₁ = ix₂ with x₂ free. A basis vector for the eigenspace is thus v₁ = [i; 1].

For λ = 3 − 3i: A basis vector for the eigenspace is v₂ = v̄₁ = [-i; 1].
3. A = [5 1; -8 1]. The characteristic polynomial is λ² − 6λ + 13, so the eigenvalues of A are

λ = (6 ± √(36 − 52))/2 = 3 ± 2i

For λ = 3 + 2i: A − (3 + 2i)I = [2-2i 1; -8 -2-2i]. The equation (A − (3 + 2i)I)x = 0 amounts to −8x₁ + (−2 − 2i)x₂ = 0, so x₁ = ((−1 − i)/4)x₂ with x₂ free. A nice basis vector for the eigenspace is thus v₁ = [-1-i; 4].

For λ = 3 − 2i: A basis vector for the eigenspace is v₂ = v̄₁ = [-1+i; 4].
4. A = [1 -2; 1 3]. The characteristic polynomial is λ² − 4λ + 5, so the eigenvalues of A are

λ = (4 ± √(−4))/2 = 2 ± i

For λ = 2 + i: A − (2 + i)I = [-1-i -2; 1 1-i]. The equation (A − (2 + i)I)x = 0 amounts to x₁ + (1 − i)x₂ = 0, so x₁ = −(1 − i)x₂ with x₂ free. A basis vector for the eigenspace is thus v₁ = [-1+i; 1].

For λ = 2 − i: A basis vector for the eigenspace is v₂ = v̄₁ = [-1-i; 1].
5. A = [3 1; -2 5]. The characteristic polynomial is λ² − 8λ + 17, so the eigenvalues of A are

λ = (8 ± √(−4))/2 = 4 ± i

For λ = 4 + i: A − (4 + i)I = [-1-i 1; -2 1-i]. The equation (A − (4 + i)I)x = 0 amounts to −2x₁ + (1 − i)x₂ = 0, so x₁ = ((1 − i)/2)x₂ with x₂ free. A basis vector for the eigenspace is thus v₁ = [1-i; 2].

For λ = 4 − i: A basis vector for the eigenspace is v₂ = v̄₁ = [1+i; 2].
6. A = [7 -5; 1 3]. The characteristic polynomial is λ² − 10λ + 26, so the eigenvalues of A are

λ = (10 ± √(−4))/2 = 5 ± i

For λ = 5 + i: A − (5 + i)I = [2-i -5; 1 -2-i]. The equation (A − (5 + i)I)x = 0 amounts to x₁ − (2 + i)x₂ = 0, so x₁ = (2 + i)x₂ with x₂ free. A basis vector for the eigenspace is thus v₁ = [2+i; 1].

For λ = 5 − i: A basis vector for the eigenspace is v₂ = v̄₁ = [2-i; 1].

7. A = [√3 -1; 1 √3]. From Example 6, the eigenvalues are √3 ± i. The scale factor for the transformation x ↦ Ax is r = |λ| = √((√3)² + 1²) = 2. For the angle of rotation, plot the point (a, b) = (√3, 1) in the xy-plane and use trigonometry:

φ = arctan(b/a) = arctan(1/√3) = π/6 radians

Note: Your students will want to know whether you permit them on an exam to omit calculations for a matrix of the form [a -b; b a] and simply write the eigenvalues a ± bi. A similar question may arise about the corresponding eigenvectors, [1; -i] and [1; i], which are announced in the Practice Problem. Students may have trouble keeping track of the correspondence between eigenvalues and eigenvectors.

8. A = [-3 -3√3; 3√3 -3]. The eigenvalues are −3 ± 3√3·i. The scale factor for the transformation x ↦ Ax is r = |λ| = √((−3)² + (3√3)²) = 6. From trigonometry, the angle of rotation φ is arctan(b/a) = arctan(3√3/(−3)) = 2π/3 radians, since the point (a, b) = (−3, 3√3) lies in the second quadrant.
9. A = [0 2; -2 0]. The eigenvalues are ±2i. The scale factor for the transformation x ↦ Ax is r = |λ| = √(0² + 2²) = 2. From trigonometry, the angle of rotation φ is arctan(b/a) = arctan(−∞) = −π/2 radians.
10. A = [0 .5; -.5 0]. The eigenvalues are ±.5i. The scale factor for the transformation x ↦ Ax is r = |λ| = √(0² + (.5)²) = .5. From trigonometry, the angle of rotation φ is arctan(b/a) = arctan(−∞) = −π/2 radians.
11. A = [-√3 -1; 1 -√3]. The eigenvalues are −√3 ± i. The scale factor for the transformation x ↦ Ax is r = |λ| = √((−√3)² + 1²) = 2. From trigonometry, the angle of rotation φ is arctan(b/a) = arctan(1/(−√3)) = 5π/6 radians, since the point (a, b) = (−√3, 1) lies in the second quadrant.
12. A = [3 -√3; √3 3]. The eigenvalues are 3 ± √3·i. The scale factor for the transformation x ↦ Ax is r = |λ| = √(3² + (√3)²) = 2√3. From trigonometry, the angle of rotation φ is arctan(b/a) = arctan(√3/3) = π/6 radians.
13. From Exercise 1, λ = 2 ± i, and the eigenvector v = [-1-i; 1] corresponds to λ = 2 − i. Since Re v = [-1; 1] and Im v = [-1; 0], take P = [-1 -1; 1 0]. Then compute

C = P⁻¹AP = [0 1; -1 -1][1 -2; 1 3][-1 -1; 1 0] = [0 1; -1 -1][-3 -1; 2 -1] = [2 -1; 1 2]

Actually, Theorem 9 gives the formula for C. Note that the eigenvector v corresponds to a − bi instead of a + bi. If, for instance, you use the eigenvector for 2 + i, your C will be [2 1; -1 2].
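The computation above can be mirrored numerically; the sketch below (not part of the original manual) rebuilds C = P⁻¹AP from the eigenvector for 2 − i:

```python
import numpy as np

# Exercise 13: P holds the real and imaginary parts of the eigenvector
# for lambda = 2 - i, and C comes out as a rotation-scaling matrix.
A = np.array([[1.0, -2.0], [1.0, 3.0]])
v = np.array([-1 - 1j, 1 + 0j])          # eigenvector for 2 - i
P = np.column_stack([v.real, v.imag])    # [[-1, -1], [1, 0]]
C = np.linalg.inv(P) @ A @ P
assert np.allclose(C, [[2.0, -1.0], [1.0, 2.0]])
```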
Notes: The Study Guide points out that the matrix C is described in Theorem 9 and the first column of C is the real part of the eigenvector corresponding to a − bi, not a + bi, as one might expect. Since students may forget this, they are encouraged to compute C from the formula C = P⁻¹AP, as in the solution above.

The Study Guide also comments that because there are two possibilities for C in the factorization of a 2×2 matrix as in Exercise 13, the measure of rotation of the angle associated with the transformation x ↦ Ax is determined only up to a change of sign. The "orientation" of the angle is determined by the change of variable x = Pu. See Figure 4 in the text.
14. A = [3 -3; 1 1]. The eigenvalues of A are 2 ± √2·i, and the eigenvector v = [√2+i; i] corresponds to λ = 2 − √2·i. By Theorem 9,

P = [Re v Im v] = [√2 1; 0 1]

and

C = P⁻¹AP = [√2/2 -√2/2; 0 1][3 -3; 1 1][√2 1; 0 1] = [2 -√2; √2 2]

There are two choices for which eigenvalue to use and many choices for the multiple of the eigenvector to use, and the solutions can look quite different. For example, for λ = 2 + √2·i an eigenvector is w = i·v̄ = [1+√2·i; 1]. Using w in Theorem 9 results in

P = [Re w Im w] = [1 √2; 1 0]

and

C = P⁻¹AP = [0 1; √2/2 -√2/2][3 -3; 1 1][1 √2; 1 0] = [2 √2; -√2 2]
15. A = [0 5; -2 2]. The eigenvalues of A are 1 ± 3i, and the eigenvector v = [3+i; 2i] corresponds to λ = 1 + 3i. By Theorem 9,

P = [Re v Im v] = [3 1; 0 2]

and

C = P⁻¹AP = (1/6)[2 -1; 0 3][0 5; -2 2][3 1; 0 2] = [1 3; -3 1]

There are many choices for the multiple of the eigenvector to use, and the solutions can look quite different. For example, w = −i·v = [1-3i; 2]. Using w in Theorem 9 results in

P = [Re w Im w] = [1 -3; 2 0]

and

C = P⁻¹AP = (1/6)[0 3; -2 1][0 5; -2 2][1 -3; 2 0] = [1 3; -3 1]
16. A = [-4 -2; 1 -6]. The eigenvalues of A are −5 ± i, and the eigenvector v = [1+i; i] corresponds to λ = −5 − i. By Theorem 9,

P = [Re v Im v] = [1 1; 0 1]

and

C = P⁻¹AP = [1 -1; 0 1][-4 -2; 1 -6][1 1; 0 1] = [-5 -1; 1 -5]

There are many choices for the multiple of the eigenvector to use, and the solutions can look quite different. For example, w = −i·v = [1-i; 1]. Using w in Theorem 9 results in

P = [Re w Im w] = [1 -1; 1 0]

and

C = P⁻¹AP = [0 1; -1 1][-4 -2; 1 -6][1 -1; 1 0] = [-5 -1; 1 -5]
17. A = [11 -4; 20 -5]. The eigenvalues of A are λ = 3 ± 4i, and the eigenvector v = [1+2i; 5i] corresponds to λ = 3 − 4i. By Theorem 9,

P = [Re v Im v] = [1 2; 0 5]

and

C = P⁻¹AP = (1/5)[5 -2; 0 1][11 -4; 20 -5][1 2; 0 5] = [3 -4; 4 3]

There are many choices for the multiple of the eigenvector to use, and the solutions can look quite different. For example, w = −i·v = [2-i; 5]. Using w in Theorem 9 results in

P = [Re w Im w] = [2 -1; 5 0]

and

C = P⁻¹AP = (1/5)[0 1; -5 2][11 -4; 20 -5][2 -1; 5 0] = [3 -4; 4 3]
18. A = [3 5; -2 5]. The eigenvalues of A are 4 ± 3i, and the eigenvector v = [3+i; 2i] corresponds to λ = 4 + 3i. By Theorem 9,

P = [Re v Im v] = [3 1; 0 2]

and

C = P⁻¹AP = (1/6)[2 -1; 0 3][3 5; -2 5][3 1; 0 2] = [4 3; -3 4]

There are many choices for the multiple of the eigenvector to use, and the solutions can look quite different. For example, w = −i·v = [1-3i; 2]. Using w in Theorem 9 results in

P = [Re w Im w] = [1 -3; 2 0]

and

C = P⁻¹AP = (1/6)[0 3; -2 1][3 5; -2 5][1 -3; 2 0] = [4 3; -3 4]
19. A = [1.52 -.7; .56 .4]. The characteristic polynomial is λ² − 1.92λ + 1, so the eigenvalues of A are λ = .96 ± .28i. To find an eigenvector corresponding to .96 − .28i, we compute

A − (.96 − .28i)I = [.56+.28i -.7; .56 -.56+.28i]

The equation (A − (.96 − .28i)I)x = 0 amounts to .56x₁ + (−.56 + .28i)x₂ = 0, so x₁ = ((2 − i)/2)x₂ with x₂ free. A nice eigenvector corresponding to .96 − .28i is thus v = [2-i; 2]. By Theorem 9,

P = [Re v Im v] = [2 -1; 2 0]

and

C = P⁻¹AP = (1/2)[0 1; -2 2][1.52 -.7; .56 .4][2 -1; 2 0] = [.96 -.28; .28 .96]
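A numpy sketch (not part of the original) confirming the rotation-scaling factor found above; note |λ| = 1 here, consistent with det A = 1:

```python
import numpy as np

# Exercise 19: C = P^{-1} A P should be the rotation matrix
# [[.96, -.28], [.28, .96]], i.e. pure rotation since .96^2 + .28^2 = 1.
A = np.array([[1.52, -0.7], [0.56, 0.4]])
P = np.array([[2.0, -1.0], [2.0, 0.0]])
C = np.linalg.inv(P) @ A @ P
assert np.allclose(C, [[0.96, -0.28], [0.28, 0.96]])
assert np.isclose(np.linalg.det(A), 1.0)
```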
20. A = [-3 -8; 4 5]. The eigenvalues of A are 1 ± 4i, and the eigenvector v = [-1-i; i] corresponds to λ = 1 + 4i. By Theorem 9,

P = [Re v Im v] = [-1 -1; 0 1]

and

C = P⁻¹AP = [-1 -1; 0 1][-3 -8; 4 5][-1 -1; 0 1] = [1 4; -4 1]

There are many choices for the multiple of the eigenvector to use, and the solutions can look quite different. For example, w = −i·v = [-1+i; 1]. Using w in Theorem 9 results in

P = [Re w Im w] = [-1 1; 1 0]

and

C = P⁻¹AP = [0 1; 1 1][-3 -8; 4 5][-1 1; 1 0] = [1 4; -4 1]
21. The first equation in (2) is (−.3 + .6i)x₁ − .6x₂ = 0. We solve this for x₂ to find that x₂ = ((−.3 + .6i)/.6)x₁ = ((−1 + 2i)/2)x₁. Letting x₁ = 2, we find that y = [2; -1+2i] is an eigenvector for the matrix A. Since

y = [2; -1+2i] = ((−1 + 2i)/5)·[-2-4i; 5] = ((−1 + 2i)/5)·v₁

the vector y is a complex multiple of the vector v₁ used in Example 2.
22. Since A(μx) = μ(Ax) = μ(λx) = λ(μx), μx is an eigenvector of A.
23. (a) properties of conjugates and the fact that the conjugate of xᵀ equals (x̄)ᵀ
(b) the conjugate of Ax equals Ā·x̄, and Ā = A because A is real
(c) x̄ᵀAx is a scalar and hence may be viewed as a 1×1 matrix
(d) properties of transposes
(e) Aᵀ = A and the definition of q
24. x̄ᵀAx = x̄ᵀ(λx) = λ·x̄ᵀx because x is an eigenvector. It is easy to see that x̄ᵀx is real (and positive) because z̄z is nonnegative for every complex number z. Since x̄ᵀAx is real, by Exercise 23, so is λ. Next, write x = u + iv, where u and v are real vectors. Then

Ax = A(u + iv) = Au + iAv  and  λx = λu + iλv

The real part of Ax is Au because the entries in A, u, and v are all real. The real part of λx is λu because λ and the entries in u and v are real. Since Ax and λx are equal, their real parts are equal, too. (Apply the corresponding statement about complex numbers to each entry of Ax.) Thus Au = λu, which shows that the real part of x is an eigenvector of A.
25. Write x = Re x + i(Im x), so that Ax = A(Re x) + iA(Im x). Since A is real, so are A(Re x) and A(Im x). Thus A(Re x) is the real part of Ax and A(Im x) is the imaginary part of Ax.
26. a. If λ = a − bi, then

Av = λv = (a − bi)(Re v + i·Im v) = (a·Re v + b·Im v) + i(a·Im v − b·Re v)

By Exercise 25,

A(Re v) = Re Av = a·Re v + b·Im v
A(Im v) = Im Av = a·Im v − b·Re v

b. Let P = [Re v Im v]. By (a),

A(Re v) = P[a; b],  A(Im v) = P[-b; a]

So

AP = [A(Re v) A(Im v)] = [P[a; b] P[-b; a]] = P[a -b; b a] = PC
27. [M] A = [26 33 23 20; -6 -8 -1 -13; -14 -19 -16 3; -20 -20 -20 -14]

The MATLAB command [V D] = eig(A) returns

V = -0.5709 - 0.4172i -0.5709 + 0.4172i -0.3708 + 0.0732i -0.3708 - 0.0732i
     0.4941 - 0.0769i  0.4941 + 0.0769i  0.4440 + 0.2976i  0.4440 - 0.2976i
     0.0769 + 0.4941i  0.0769 - 0.4941i -0.4440 - 0.2976i -0.4440 + 0.2976i
    -0.0000 - 0.0000i -0.0000 + 0.0000i  0.2976 - 0.4440i  0.2976 + 0.4440i

D = -2.0000 + 5.0000i  0                 0                 0
     0                -2.0000 - 5.0000i  0                 0
     0                 0                -4.0000 +10.0000i  0
     0                 0                 0                -4.0000 -10.0000i

The eigenvalues of A are the elements listed on the diagonal of D. The corresponding eigenvectors are listed in the corresponding columns of V. To get a nice eigenvector for λ₁ = −2 + 5i, take v₁ = V(1:4,1)/V(3,1), resulting in

v₁ = [-1+i; -i; 1; 0]

For λ₂ = −4 + 10i, take v₂ = V(1:4,3)/V(4,3), and then multiply the vector by 2, resulting in

v₂ = [-1-i; 2i; -2i; 2]

Hence by Theorem 9,

P = [Re v₁ Im v₁ Re v₂ Im v₂] = [-1 1 -1 -1; 0 -1 0 2; 1 0 0 -2; 0 0 2 0]

and

C = [-2 5 0 0; -5 -2 0 0; 0 0 -4 10; 0 0 -10 -4]

Other choices are possible, but C must equal P⁻¹AP.
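Because A, P, and C here were recovered from a garbled scan, a verification sketch (assuming the entries above) is worthwhile:

```python
import numpy as np

# Exercise 27: check that the recovered P and block matrix C
# satisfy C = P^{-1} A P.
A = np.array([[26, 33, 23, 20], [-6, -8, -1, -13],
              [-14, -19, -16, 3], [-20, -20, -20, -14]], dtype=float)
P = np.array([[-1, 1, -1, -1], [0, -1, 0, 2],
              [1, 0, 0, -2], [0, 0, 2, 0]], dtype=float)
C = np.array([[-2, 5, 0, 0], [-5, -2, 0, 0],
              [0, 0, -4, 10], [0, 0, -10, -4]], dtype=float)
assert np.allclose(np.linalg.inv(P) @ A @ P, C)
```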
28. [M] A = [7 11 20 17; -20 -40 -86 -74; 0 -5 -10 -10; 10 28 60 53]

The MATLAB command [V D] = eig(A) returns

V = -0.2132 + 0.2132i -0.2132 - 0.2132i  0.1085 - 0.3254i  0.1085 + 0.3254i
     0      - 0.8528i  0      + 0.8528i  0.3254 + 0.1085i  0.3254 - 0.1085i
     0                 0                 0      - 0.5423i  0      + 0.5423i
     0      + 0.4264i  0      - 0.4264i -0.2169 + 0.6508i -0.2169 - 0.6508i

D = 2.0000 + 5.0000i  0                 0                 0
    0                 2.0000 - 5.0000i  0                 0
    0                 0                 3.0000 + 1.0000i  0
    0                 0                 0                 3.0000 - 1.0000i

The eigenvalues of A are the elements listed on the diagonal of D. The corresponding eigenvectors are listed in the corresponding columns of V. To get a nice eigenvector for λ₁ = 2 + 5i, take v₁ = V(1:4,1)/V(4,1), and then multiply the vector by 2, resulting in

v₁ = [1+i; -4; 0; 2]

For λ₂ = 3 + i, take v₂ = V(1:4,3)/V(4,3), and then multiply the vector by 4, resulting in

v₂ = [-2; -2i; -3+i; 4]

Hence by Theorem 9,

P = [Re v₁ Im v₁ Re v₂ Im v₂] = [1 1 -2 0; -4 0 0 -2; 0 0 -3 1; 2 0 4 0]

and

C = [2 5 0 0; -5 2 0 0; 0 0 3 1; 0 0 -1 3]

Other choices are possible, but C must equal P⁻¹AP.
5.6 SOLUTIONS
1. The exercise does not specify the matrix A, but only lists the eigenvalues 3 and 1/3, and the corresponding eigenvectors v₁ = [1; 1] and v₂ = [-1; 1]. Also, x₀ = [9; 1].

a. To find the action of A on x₀, express x₀ in terms of v₁ and v₂. That is, find c₁ and c₂ such that x₀ = c₁v₁ + c₂v₂. This is certainly possible because the eigenvectors v₁ and v₂ are linearly independent (by inspection and also because they correspond to distinct eigenvalues) and hence form a basis for R². (Two linearly independent vectors in R² automatically span R².) The row reduction

[v₁ v₂ x₀] = [1 -1 9; 1 1 1] ~ [1 0 5; 0 1 -4]

shows that x₀ = 5v₁ − 4v₂. Since v₁ and v₂ are eigenvectors (for the eigenvalues 3 and 1/3):

x₁ = Ax₀ = 5Av₁ − 4Av₂ = 5·3·v₁ − 4·(1/3)·v₂ = [49/3; 41/3]

b. Each time A acts on a linear combination of v₁ and v₂, the v₁ term is multiplied by the eigenvalue 3 and the v₂ term is multiplied by the eigenvalue 1/3:

x₂ = Ax₁ = A[5·3·v₁ − 4(1/3)v₂] = 5(3)²v₁ − 4(1/3)²v₂

In general, x_k = 5(3)^k·v₁ − 4(1/3)^k·v₂, for k ≥ 0.
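Exercise 1 never specifies A, but any matrix with these eigenpairs behaves the same way. The sketch below (an illustration, not part of the original) builds one such A and checks the closed form against direct iteration:

```python
import numpy as np

# Construct A = P D P^{-1} with eigenpairs (3, [1,1]) and (1/3, [-1,1]),
# then compare x_{k+1} = A x_k with the closed form 5*3^k v1 - 4*(1/3)^k v2.
P = np.array([[1.0, -1.0], [1.0, 1.0]])          # columns v1, v2
A = P @ np.diag([3.0, 1/3]) @ np.linalg.inv(P)
x = np.array([9.0, 1.0])
for k in range(1, 4):
    x = A @ x
    closed = 5 * 3.0**k * P[:, 0] - 4 * (1/3)**k * P[:, 1]
    assert np.allclose(x, closed)
```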
2. The vectors

v₁ = [1; 0; -3], v₂ = [2; 1; -5], v₃ = [-3; -3; 7]

are eigenvectors of a 3×3 matrix A, corresponding to eigenvalues 3, 4/5, and 3/5, respectively. Also, x₀ = [-2; -5; 3]. To describe the solution of the equation x_{k+1} = Ax_k (k = 1, 2, …), first write x₀ in terms of the eigenvectors:

[v₁ v₂ v₃ x₀] = [1 2 -3 -2; 0 1 -3 -5; -3 -5 7 3] ~ [1 0 0 2; 0 1 0 1; 0 0 1 2], so x₀ = 2v₁ + v₂ + 2v₃

Then x₁ = Ax₀ = 2Av₁ + Av₂ + 2Av₃ = 2·3·v₁ + (4/5)v₂ + 2(3/5)v₃. In general,

x_k = 2(3)^k·v₁ + (4/5)^k·v₂ + 2(3/5)^k·v₃

For all k sufficiently large,

x_k ≈ 2(3)^k·v₁ = 2(3)^k·[1; 0; -3]
3. A = [.5 .4; -.2 1.1], det(A − λI) = (.5 − λ)(1.1 − λ) + .08 = λ² − 1.6λ + .63. This characteristic polynomial factors as (λ − .9)(λ − .7), so the eigenvalues are .9 and .7. If v₁ and v₂ denote corresponding eigenvectors, and if x₀ = c₁v₁ + c₂v₂, then

x₁ = A(c₁v₁ + c₂v₂) = c₁Av₁ + c₂Av₂ = c₁(.9)v₁ + c₂(.7)v₂

and for k ≥ 1,

x_k = c₁(.9)^k·v₁ + c₂(.7)^k·v₂

For any choices of c₁ and c₂, both the owl and wood rat populations decline over time.
4. A = [.5 .4; -.125 1.1], det(A − λI) = (.5 − λ)(1.1 − λ) − (.4)(−.125) = λ² − 1.6λ + .6. This characteristic polynomial factors as (λ − 1)(λ − .6), so the eigenvalues are 1 and .6. For the eigenvalue 1, solve (A − I)x = 0:

[-.5 .4 0; -.125 .1 0] ~ [-.5 .4 0; 0 0 0]

A basis for the eigenspace is v₁ = [4; 5]. Let v₂ be an eigenvector for the eigenvalue .6. (The entries in v₂ are not important for the long-term behavior of the system.) If x₀ = c₁v₁ + c₂v₂, then x₁ = c₁Av₁ + c₂Av₂ = c₁v₁ + c₂(.6)v₂, and for k sufficiently large,

x_k = c₁[4; 5] + c₂(.6)^k·v₂ ≈ c₁[4; 5]

Provided that c₁ ≠ 0, the owl and wood rat populations each stabilize in size, and eventually the populations are in the ratio of 4 owls for each 5 thousand rats. If some aspect of the model were to change slightly, the characteristic equation would change slightly and the perturbed matrix A might not have 1 as an eigenvalue. If the eigenvalue becomes slightly larger than 1, the two populations will grow; if the eigenvalue becomes slightly less than 1, both populations will decline.
5. A = [.4 .3; -.325 1.2], det(A − λI) = λ² − 1.6λ + .5775. The quadratic formula provides the roots of the characteristic equation:

λ = (1.6 ± √(1.6² − 4(.5775)))/2 = (1.6 ± √.25)/2 = 1.05 and .55

Because one eigenvalue is larger than one, both populations grow in size. Their relative sizes are determined eventually by the entries in the eigenvector corresponding to 1.05. Solve (A − 1.05I)x = 0:

[-.65 .3 0; -.325 .15 0] ~ [13 -6 0; 0 0 0]   An eigenvector is v₁ = [6; 13]

Eventually, there will be about 6 spotted owls for every 13 (thousand) flying squirrels.
6. When p = .5, A = [.4 .3; -.5 1.2], and det(A − λI) = λ² − 1.6λ + .63 = (λ − .9)(λ − .7). The eigenvalues of A are .9 and .7, both less than 1 in magnitude. The origin is an attractor for the dynamical system and each trajectory tends toward 0. So both populations of owls and squirrels eventually perish.

The calculations in Exercise 4 (as well as those in Exercises 27 and 33 in Section 5.2) show that if the largest eigenvalue of A is 1, then in most cases the population vector x_k will tend toward a multiple of the eigenvector corresponding to the eigenvalue 1. [If v₁ and v₂ are eigenvectors, with v₁ corresponding to λ = 1, and if x₀ = c₁v₁ + c₂v₂, then x_k tends toward c₁v₁, provided c₁ is not zero.] So the problem here is to determine the value of the predation parameter p such that the largest eigenvalue of A is 1. Compute the characteristic polynomial:

det[.4-λ .3; -p 1.2-λ] = (.4 − λ)(1.2 − λ) + .3p = λ² − 1.6λ + (.48 + .3p)

By the quadratic formula,

λ = (1.6 ± √(1.6² − 4(.48 + .3p)))/2

The larger eigenvalue is 1 when

1.6 + √(1.6² − 4(.48 + .3p)) = 2  and  2.56 − 1.92 − 1.2p = .16

In this case, .64 − 1.2p = .16, and p = .4.
7. a. The matrix A in Exercise 1 has eigenvalues 3 and 1/3. Since |3| > 1 and |1/3| < 1, the origin is a saddle point.

b. The direction of greatest attraction is determined by v₂ = [-1; 1], the eigenvector corresponding to the eigenvalue with absolute value less than 1. The direction of greatest repulsion is determined by v₁ = [1; 1], the eigenvector corresponding to the eigenvalue greater than 1.

c. The drawing below shows: (1) lines through the eigenvectors and the origin, (2) arrows toward the origin (showing attraction) on the line through v₂ and arrows away from the origin (showing repulsion) on the line through v₁, (3) several typical trajectories (with arrows) that show the general flow of points. No specific points other than v₁ and v₂ were computed. This type of drawing is about all that one can make without using a computer to plot points.

Note: If you wish your class to sketch trajectories for anything except saddle points, you will need to go beyond the discussion in the text. The following remarks from the Study Guide are relevant.

Sketching trajectories for a dynamical system in which the origin is an attractor or a repellor is more difficult than the sketch in Exercise 7. There has been no discussion of the direction in which the trajectories "bend" as they move toward or away from the origin. For instance, if you rotate Figure 1 of Section 5.6 through a quarter-turn and relabel the axes so that x₁ is on the horizontal axis, then the new figure corresponds to the matrix A with the diagonal entries .8 and .64 interchanged. In general, if A is a diagonal matrix, with positive diagonal entries a and d, unequal to 1, then the trajectories lie on the axes or on curves whose equations have the form x₂ = r(x₁)^s, where s = (ln d)/(ln a) and r depends on the initial point x₀. (See Encounters with Chaos, by Denny Gulick, New York: McGraw-Hill, 1992, pp. 147–150.)

8. The matrix from Exercise 2 has eigenvalues 3, 4/5, and 3/5. Since one eigenvalue is greater than 1 and the others are less than one in magnitude, the origin is a saddle point. The direction of greatest repulsion is the line through the origin and the eigenvector (1, 0, −3) for the eigenvalue 3. The direction of greatest attraction is the line through the origin and the eigenvector (−3, −3, 7) for the smallest eigenvalue 3/5.

9. A = [1.7 -.3; -1.2 .8], det(A − λI) = λ² − 2.5λ + 1 = 0

λ = (2.5 ± √(2.5² − 4(1)))/2 = (2.5 ± 1.5)/2 = 2 and .5

The origin is a saddle point because one eigenvalue is greater than 1 and the other eigenvalue is less than 1 in magnitude. The direction of greatest repulsion is through the origin and the eigenvector v₁ found below. Solve (A − 2I)x = 0: [-.3 -.3 0; -1.2 -1.2 0] ~ [1 1 0; 0 0 0], so x₁ = −x₂, and x₂ is free. Take v₁ = [-1; 1]. The direction of greatest attraction is through the origin and the eigenvector v₂ found
below. Solve (A − .5I)x = 0: [1.2 -.3 0; -1.2 .3 0] ~ [1 -.25 0; 0 0 0], so x₁ = .25x₂, and x₂ is free. Take v₂ = [1; 4].
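A numpy sketch (not part of the original) confirming both eigenpairs of Exercise 9:

```python
import numpy as np

# Exercise 9: v1 = [-1, 1] spans the direction of greatest repulsion
# (eigenvalue 2); v2 = [1, 4] spans greatest attraction (eigenvalue .5).
A = np.array([[1.7, -0.3], [-1.2, 0.8]])
assert np.allclose(sorted(np.linalg.eigvals(A).real), [0.5, 2.0])
assert np.allclose(A @ [-1.0, 1.0], 2.0 * np.array([-1.0, 1.0]))
assert np.allclose(A @ [1.0, 4.0], 0.5 * np.array([1.0, 4.0]))
```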
10. A = [.3 .4; -.3 1.1], det(A − λI) = λ² − 1.4λ + .45 = 0

λ = (1.4 ± √(1.4² − 4(.45)))/2 = (1.4 ± .4)/2 = .5 and .9

The origin is an attractor because both eigenvalues are less than 1 in magnitude. The direction of greatest attraction is through the origin and the eigenvector v₁ found below. Solve (A − .5I)x = 0: [-.2 .4 0; -.3 .6 0] ~ [1 -2 0; 0 0 0], so x₁ = 2x₂, and x₂ is free. Take v₁ = [2; 1].
11. A = [.4 .5; -.4 1.3], det(A − λI) = λ² − 1.7λ + .72 = 0

λ = (1.7 ± √(1.7² − 4(.72)))/2 = (1.7 ± .1)/2 = .8 and .9

The origin is an attractor because both eigenvalues are less than 1 in magnitude. The direction of greatest attraction is through the origin and the eigenvector v₁ found below. Solve (A − .8I)x = 0: [-.4 .5 0; -.4 .5 0] ~ [1 -1.25 0; 0 0 0], so x₁ = 1.25x₂, and x₂ is free. Take v₁ = [5; 4].
12. A = [.5 .6; -.3 1.4], det(A − λI) = λ² − 1.9λ + .88 = 0

λ = (1.9 ± √(1.9² − 4(.88)))/2 = (1.9 ± .3)/2 = .8 and 1.1

The origin is a saddle point because one eigenvalue is greater than 1 and the other eigenvalue is less than 1 in magnitude. The direction of greatest repulsion is through the origin and the eigenvector v₁ found below. Solve (A − 1.1I)x = 0: [-.6 .6 0; -.3 .3 0] ~ [1 -1 0; 0 0 0], so x₁ = x₂, and x₂ is free. Take v₁ = [1; 1]. The direction of greatest attraction is through the origin and the eigenvector v₂ found below. Solve (A − .8I)x = 0: [-.3 .6 0; -.3 .6 0] ~ [1 -2 0; 0 0 0], so x₁ = 2x₂, and x₂ is free. Take v₂ = [2; 1].
13.
2
83
det( )23 132 0
415
AAI
..
ªº
=,=.+.=
«»
..
¬¼
2
23 23 4(132) 23 01 23 1 11 and 12
222
. ..± . .
====..
The origin is a repellor because both eigenvalues are greater than 1 in magnitude. The direction of
greatest repulsion is through the origin and the eigenvector
1
v found below. Solve
430 1 750
(12) ,
430 0 00
.. .
ªºª º
.=:
«»« »
..
¬¼¬ ¼
x0 AI so
12
75 ,=.xx
and
2
x is free. Take
1
3.
4
ªº
=«»
¬¼
v
14. A = [1.7 .6; -.4 .7], det(A - λI) = λ² - 2.4λ + 1.43 = 0.

λ = (2.4 ± √(2.4² - 4(1.43)))/2 = (2.4 ± √.04)/2 = (2.4 ± .2)/2 = 1.1 and 1.3

The origin is a repellor because both eigenvalues are greater than 1 in magnitude. The direction of greatest repulsion is through the origin and the eigenvector v1 found below. Solve (A - 1.3I)x = 0: [.4 .6 0; -.4 -.6 0] ~ [1 1.5 0; 0 0 0], so x1 = -1.5x2, and x2 is free. Take v1 = [-3; 2].
15. A = [.4 0 .2; .3 .8 .3; .3 .2 .5]. Given eigenvector v1 = [.1; .6; .3] and eigenvalues .5 and .2. To find the eigenvalue for v1, compute

Av1 = [.4 0 .2; .3 .8 .3; .3 .2 .5][.1; .6; .3] = [.1; .6; .3] = 1·v1

Thus v1 is an eigenvector for λ = 1.

For λ = .5: [-.1 0 .2 0; .3 .3 .3 0; .3 .2 0 0] ~ [1 0 -2 0; 0 1 3 0; 0 0 0 0]. Then x1 = 2x3 and x2 = -3x3 with x3 free. Set v2 = [2; -3; 1].

For λ = .2: [.2 0 .2 0; .3 .6 .3 0; .3 .2 .3 0] ~ [1 0 1 0; 0 1 0 0; 0 0 0 0]. Then x1 = -x3 and x2 = 0 with x3 free. Set v3 = [-1; 0; 1].

Given x0 = (0, .3, .7), find weights such that x0 = c1v1 + c2v2 + c3v3:

[v1 v2 v3 x0] = [.1 2 -1 0; .6 -3 0 .3; .3 1 1 .7] ~ [1 0 0 1; 0 1 0 .1; 0 0 1 .3]

Thus x0 = v1 + .1v2 + .3v3, and

x1 = Ax0 = Av1 + .1Av2 + .3Av3 = v1 + .1(.5)v2 + .3(.2)v3
xk = v1 + .1(.5)^k v2 + .3(.2)^k v3

As k increases, xk approaches v1.
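Because x0 has been expressed in the eigenvector basis, the formula for xk can be verified directly against repeated multiplication by A. A small sketch (NumPy assumed, not the manual's tool):

```python
import numpy as np

# Exercise 15: x_k = v1 + .1(.5)^k v2 + .3(.2)^k v3 versus direct iteration.
A = np.array([[0.4, 0.0, 0.2],
              [0.3, 0.8, 0.3],
              [0.3, 0.2, 0.5]])
v1 = np.array([0.1, 0.6, 0.3])    # eigenvalue 1
v2 = np.array([2.0, -3.0, 1.0])   # eigenvalue .5
v3 = np.array([-1.0, 0.0, 1.0])   # eigenvalue .2

xk = v1 + 0.1 * v2 + 0.3 * v3     # this is x0 = (0, .3, .7)
for k in range(1, 11):
    xk = A @ xk
    formula = v1 + 0.1 * 0.5**k * v2 + 0.3 * 0.2**k * v3
    assert np.allclose(xk, formula)

# As k grows, x_k approaches the steady-state vector v1 = (.1, .6, .3).
print(np.round(xk, 3))
```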
16. [M] A = [.90 .01 .09; .01 .90 .01; .09 .09 .90], and ev = eig(A) = (1.0000, .8900, .8100). To four decimal places,

v1 = nulbasis(A - eye(3)) = [.9192; .1919; 1.0000], exactly [91/99; 19/99; 1]
v2 = nulbasis(A - ev(2)*eye(3)) = [-1; 1; 0]
v3 = nulbasis(A - ev(3)*eye(3)) = [-1; 0; 1]

The general solution of the dynamical system is xk = c1v1 + c2(.89)^k v2 + c3(.81)^k v3.

Note: When working with stochastic matrices and starting with a probability vector (having nonnegative entries whose sum is 1), it helps to scale v1 to make its entries sum to 1. If v1 = (91/209, 19/209, 99/209), or (.435, .091, .474) to three decimal places, then the weight c1 above turns out to be 1. See the text's discussion of Exercise 27 in Section 5.2.
17. a. A = [0 1.6; .3 .8]

b. det(A - λI) = λ² - .8λ - .48 = 0. The eigenvalues of A are given by

λ = (.8 ± √((-.8)² + 4(.48)))/2 = (.8 ± √2.56)/2 = (.8 ± 1.6)/2 = 1.2 and -.4

The numbers of juveniles and adults are increasing because the largest eigenvalue is greater than 1. The eventual growth rate of each age class is 1.2, which is 20% per year.

To find the eventual relative population sizes, solve (A - 1.2I)x = 0: [-1.2 1.6 0; .3 -.4 0] ~ [1 -4/3 0; 0 0 0]. Then x1 = (4/3)x2 with x2 free; set v = [4; 3]. Eventually, there will be about 4 juveniles for every 3 adults.

c. [M] Suppose that the initial populations are given by x0 = (15, 10). The Study Guide describes how to generate the trajectory for as many years as desired and then to plot the values for each population. Let xk = (jk, ak). Then we need to plot the sequences {jk}, {ak}, {jk + ak}, and {jk/ak}. Adjacent points in a sequence can be connected with a line segment. When a sequence is plotted, the resulting graph can be captured on the screen and printed (if done on a computer) or copied by hand onto paper (if working with a graphics calculator).
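The trajectory from x0 = (15, 10) also makes the conclusions of part (b) visible numerically: a hedged sketch (NumPy assumed, in place of the Study Guide's MATLAB workflow):

```python
import numpy as np

# Exercise 17(c): iterate the stage matrix from x0 = (15, 10).
A = np.array([[0.0, 1.6],
              [0.3, 0.8]])
x = np.array([15.0, 10.0])
for _ in range(50):
    x = A @ x

# The yearly growth factor approaches the dominant eigenvalue 1.2
# (20% per year), and juveniles/adults approaches 4/3.
growth = (A @ x).sum() / x.sum()
ratio = x[0] / x[1]
print(round(growth, 4), round(ratio, 4))   # 1.2 1.3333
```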
18. a. A = [0 0 .42; .6 0 0; 0 .75 .95]

b. ev = eig(A) = [-.0774 + .4063i; -.0774 - .4063i; 1.1048]. The long-term growth rate is 1.105, about 10.5% per year.

v = nulbasis(A - ev(3)*eye(3)) = [.3801; .2064; 1.0000]

For each 100 adults, there will be approximately 38 calves and 21 yearlings.
Note:
The MATLAB box in the Study Guide and the various technology appendices all give directions
for generating the sequence of points in a trajectory of a dynamical system. Details for producing a
graphical representation of a trajectory are also given, with several options available in MATLAB, Maple,
and Mathematica.
5.7 SOLUTIONS
1. From the "eigendata" (eigenvalues and corresponding eigenvectors) given, the eigenfunctions for the differential equation x′ = Ax are v1e^{4t} and v2e^{2t}. The general solution of x′ = Ax has the form

c1[-3; 1]e^{4t} + c2[-1; 1]e^{2t}

The initial condition x(0) = [-6; 1] determines c1 and c2:

c1[-3; 1]e^{4(0)} + c2[-1; 1]e^{2(0)} = [-6; 1]
[-3 -1 -6; 1 1 1] ~ [1 0 5/2; 0 1 -3/2]

Thus c1 = 5/2, c2 = -3/2, and x(t) = (5/2)[-3; 1]e^{4t} - (3/2)[-1; 1]e^{2t}.
2. From the eigendata given, the eigenfunctions for the differential equation x′ = Ax are v1e^{3t} and v2e^{t}. The general solution of x′ = Ax has the form

c1[-1; 1]e^{3t} + c2[1; 1]e^{t}

The initial condition x(0) = [2; 3] determines c1 and c2:

c1[-1; 1]e^{3(0)} + c2[1; 1]e^{1(0)} = [2; 3]
[-1 1 2; 1 1 3] ~ [1 0 1/2; 0 1 5/2]

Thus c1 = 1/2, c2 = 5/2, and x(t) = (1/2)[-1; 1]e^{3t} + (5/2)[1; 1]e^{t}.
3. A = [2 3; -1 -2], det(A - λI) = λ² - 1 = (λ - 1)(λ + 1) = 0. Eigenvalues: 1 and -1.

For λ = 1: [1 3 0; -1 -3 0] ~ [1 3 0; 0 0 0], so x1 = -3x2 with x2 free. Take x2 = 1 and v1 = [-3; 1].

For λ = -1: [3 3 0; -1 -1 0] ~ [1 1 0; 0 0 0], so x1 = -x2 with x2 free. Take x2 = 1 and v2 = [-1; 1].

For the initial condition x(0) = [3; 2], find c1 and c2 such that c1v1 + c2v2 = x(0):

[v1 v2 x(0)] = [-3 -1 3; 1 1 2] ~ [1 0 -5/2; 0 1 9/2]

Thus c1 = -5/2, c2 = 9/2, and x(t) = -(5/2)[-3; 1]e^{t} + (9/2)[-1; 1]e^{-t}.

Since one eigenvalue is positive and the other is negative, the origin is a saddle point of the dynamical system described by x′ = Ax. The direction of greatest attraction is the line through v2 and the origin. The direction of greatest repulsion is the line through v1 and the origin.
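The eigenfunction solution found above agrees with x(t) = e^{At}x(0). The sketch below (NumPy assumed) computes e^{At} from the eigendecomposition and compares the two at several times:

```python
import numpy as np

# Exercise 3: compare the eigenfunction solution with x(t) = e^{At} x(0),
# where e^{At} = P diag(e^{t}, e^{-t}) P^{-1}.
A = np.array([[2.0, 3.0],
              [-1.0, -2.0]])
v1 = np.array([-3.0, 1.0])    # eigenvalue  1
v2 = np.array([-1.0, 1.0])    # eigenvalue -1
c1, c2 = -2.5, 4.5
x0 = np.array([3.0, 2.0])

P = np.column_stack([v1, v2])
for t in (0.0, 0.5, 1.3):
    expAt = P @ np.diag([np.exp(t), np.exp(-t)]) @ np.linalg.inv(P)
    closed_form = c1 * v1 * np.exp(t) + c2 * v2 * np.exp(-t)
    assert np.allclose(expAt @ x0, closed_form)
print("eigenfunction solution matches e^{At} x(0)")
```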
4. A = [-2 -5; 1 4], det(A - λI) = λ² - 2λ - 3 = (λ + 1)(λ - 3) = 0. Eigenvalues: -1 and 3.

For λ = 3: [-5 -5 0; 1 1 0] ~ [1 1 0; 0 0 0], so x1 = -x2 with x2 free. Take x2 = 1 and v1 = [-1; 1].

For λ = -1: [-1 -5 0; 1 5 0] ~ [1 5 0; 0 0 0], so x1 = -5x2 with x2 free. Take x2 = 1 and v2 = [-5; 1].

For the initial condition x(0) = [3; 2], find c1 and c2 such that c1v1 + c2v2 = x(0):

[v1 v2 x(0)] = [-1 -5 3; 1 1 2] ~ [1 0 13/4; 0 1 -5/4]

Thus c1 = 13/4, c2 = -5/4, and x(t) = (13/4)[-1; 1]e^{3t} - (5/4)[-5; 1]e^{-t}.

Since one eigenvalue is positive and the other is negative, the origin is a saddle point of the dynamical system described by x′ = Ax. The direction of greatest attraction is the line through v2 and the origin. The direction of greatest repulsion is the line through v1 and the origin.
5. A = [7 -1; 3 3], det(A - λI) = λ² - 10λ + 24 = (λ - 4)(λ - 6) = 0. Eigenvalues: 4 and 6.

For λ = 4: [3 -1 0; 3 -1 0] ~ [1 -1/3 0; 0 0 0], so x1 = (1/3)x2 with x2 free. Take x2 = 3 and v1 = [1; 3].

For λ = 6: [1 -1 0; 3 -3 0] ~ [1 -1 0; 0 0 0], so x1 = x2 with x2 free. Take x2 = 1 and v2 = [1; 1].

For the initial condition x(0) = [3; 2], find c1 and c2 such that c1v1 + c2v2 = x(0):

[v1 v2 x(0)] = [1 1 3; 3 1 2] ~ [1 0 -1/2; 0 1 7/2]

Thus c1 = -1/2, c2 = 7/2, and x(t) = -(1/2)[1; 3]e^{4t} + (7/2)[1; 1]e^{6t}.

Since both eigenvalues are positive, the origin is a repellor of the dynamical system described by x′ = Ax. The direction of greatest repulsion is the line through v2 and the origin.
6. A = [1 -2; 3 -4], det(A - λI) = λ² + 3λ + 2 = (λ + 1)(λ + 2) = 0. Eigenvalues: -2 and -1.

For λ = -2: [3 -2 0; 3 -2 0] ~ [1 -2/3 0; 0 0 0], so x1 = (2/3)x2 with x2 free. Take x2 = 3 and v1 = [2; 3].

For λ = -1: [2 -2 0; 3 -3 0] ~ [1 -1 0; 0 0 0], so x1 = x2 with x2 free. Take x2 = 1 and v2 = [1; 1].

For the initial condition x(0) = [3; 2], find c1 and c2 such that c1v1 + c2v2 = x(0):

[v1 v2 x(0)] = [2 1 3; 3 1 2] ~ [1 0 -1; 0 1 5]

Thus c1 = -1, c2 = 5, and x(t) = -[2; 3]e^{-2t} + 5[1; 1]e^{-t}.

Since both eigenvalues are negative, the origin is an attractor of the dynamical system described by x′ = Ax. The direction of greatest attraction is the line through v1 and the origin.
7. From Exercise 5, A = [7 -1; 3 3], with eigenvectors v1 = [1; 3] and v2 = [1; 1] corresponding to eigenvalues 4 and 6 respectively. To decouple the equation x′ = Ax, set P = [v1 v2] = [1 1; 3 1] and let D = [4 0; 0 6], so that A = PDP⁻¹ and D = P⁻¹AP. Substituting x(t) = Py(t) into x′ = Ax we have

(d/dt)(Py) = A(Py) = (PDP⁻¹)Py = PDy

Since P has constant entries, (d/dt)(Py) = P(dy/dt), so that left-multiplying the equality P(dy/dt) = PDy by P⁻¹ yields y′ = Dy, or

[y1′(t); y2′(t)] = [4 0; 0 6][y1(t); y2(t)]
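The decoupling claim D = P⁻¹AP is easy to confirm numerically (NumPy assumed):

```python
import numpy as np

# Exercise 7: P^{-1} A P = D = diag(4, 6), which decouples x' = Ax
# into y1' = 4*y1 and y2' = 6*y2.
A = np.array([[7.0, -1.0],
              [3.0, 3.0]])
P = np.array([[1.0, 1.0],
              [3.0, 1.0]])
D = np.linalg.inv(P) @ A @ P
print(np.round(D, 10))
```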
8. From Exercise 6, A = [1 -2; 3 -4], with eigenvectors v1 = [2; 3] and v2 = [1; 1] corresponding to eigenvalues -2 and -1 respectively. To decouple the equation x′ = Ax, set P = [v1 v2] = [2 1; 3 1] and let D = [-2 0; 0 -1], so that A = PDP⁻¹ and D = P⁻¹AP. Substituting x(t) = Py(t) into x′ = Ax we have

(d/dt)(Py) = A(Py) = (PDP⁻¹)Py = PDy

Since P has constant entries, (d/dt)(Py) = P(dy/dt), so that left-multiplying the equality P(dy/dt) = PDy by P⁻¹ yields y′ = Dy, or

[y1′(t); y2′(t)] = [-2 0; 0 -1][y1(t); y2(t)]
9. A = [-3 2; -1 -1]. An eigenvalue of A is -2 + i with corresponding eigenvector v = [1 - i; 1]. The complex eigenfunctions e^{λt}v and its complex conjugate form a basis for the set of all complex solutions to x′ = Ax. The general complex solution is

c1[1 - i; 1]e^{(-2+i)t} + c2[1 + i; 1]e^{(-2-i)t}

where c1 and c2 are arbitrary complex numbers. To build the general real solution, rewrite ve^{(-2+i)t} as:

ve^{(-2+i)t} = [1 - i; 1]e^{-2t}(cos t + i sin t)
= [cos t + sin t + i(sin t - cos t); cos t + i sin t]e^{-2t}
= [cos t + sin t; cos t]e^{-2t} + i[sin t - cos t; sin t]e^{-2t}

The general real solution has the form

c1[cos t + sin t; cos t]e^{-2t} + c2[sin t - cos t; sin t]e^{-2t}

where c1 and c2 now are real numbers. The trajectories are spirals because the eigenvalues are complex. The spirals tend toward the origin because the real parts of the eigenvalues are negative.
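The passage from the complex eigenfunction to the two real solutions can be checked at several values of t (NumPy assumed):

```python
import numpy as np

# Exercise 9: real and imaginary parts of v e^{lambda t} give the two
# real solutions used in the general real solution.
A = np.array([[-3.0, 2.0],
              [-1.0, -1.0]])
lam = -2.0 + 1.0j
v = np.array([1.0 - 1.0j, 1.0 + 0.0j])
assert np.allclose(A @ v, lam * v)          # (lam, v) is an eigenpair

for t in np.linspace(0.0, 3.0, 7):
    z = v * np.exp(lam * t)
    re = np.exp(-2 * t) * np.array([np.cos(t) + np.sin(t), np.cos(t)])
    im = np.exp(-2 * t) * np.array([np.sin(t) - np.cos(t), np.sin(t)])
    assert np.allclose(z.real, re)
    assert np.allclose(z.imag, im)
print("real form of the eigenfunction verified")
```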
10. A = [3 1; -2 1]. An eigenvalue of A is 2 + i with corresponding eigenvector v = [-1 - i; 2]. The complex eigenfunctions e^{λt}v and its complex conjugate form a basis for the set of all complex solutions to x′ = Ax. The general complex solution is

c1[-1 - i; 2]e^{(2+i)t} + c2[-1 + i; 2]e^{(2-i)t}

where c1 and c2 are arbitrary complex numbers. To build the general real solution, rewrite ve^{(2+i)t} as:

ve^{(2+i)t} = [-1 - i; 2]e^{2t}(cos t + i sin t)
= [-cos t + sin t + i(-sin t - cos t); 2cos t + 2i sin t]e^{2t}
= [-cos t + sin t; 2cos t]e^{2t} + i[-sin t - cos t; 2sin t]e^{2t}

The general real solution has the form

c1[-cos t + sin t; 2cos t]e^{2t} + c2[-sin t - cos t; 2sin t]e^{2t}

where c1 and c2 now are real numbers. The trajectories are spirals because the eigenvalues are complex. The spirals tend away from the origin because the real parts of the eigenvalues are positive.
11. A = [-3 -9; 2 3]. An eigenvalue of A is 3i with corresponding eigenvector v = [-3 + 3i; 2]. The complex eigenfunctions e^{λt}v and its complex conjugate form a basis for the set of all complex solutions to x′ = Ax. The general complex solution is

c1[-3 + 3i; 2]e^{3it} + c2[-3 - 3i; 2]e^{-3it}

where c1 and c2 are arbitrary complex numbers. To build the general real solution, rewrite ve^{3it} as:

ve^{3it} = [-3 + 3i; 2](cos 3t + i sin 3t)
= [-3cos 3t - 3sin 3t; 2cos 3t] + i[3cos 3t - 3sin 3t; 2sin 3t]

The general real solution has the form

c1[-3cos 3t - 3sin 3t; 2cos 3t] + c2[3cos 3t - 3sin 3t; 2sin 3t]

where c1 and c2 now are real numbers. The trajectories are ellipses about the origin because the real parts of the eigenvalues are zero.
12. A = [-7 10; -4 5]. An eigenvalue of A is -1 + 2i with corresponding eigenvector v = [3 - i; 2]. The complex eigenfunctions e^{λt}v and its complex conjugate form a basis for the set of all complex solutions to x′ = Ax. The general complex solution is

c1[3 - i; 2]e^{(-1+2i)t} + c2[3 + i; 2]e^{(-1-2i)t}

where c1 and c2 are arbitrary complex numbers. To build the general real solution, rewrite ve^{(-1+2i)t} as:

ve^{(-1+2i)t} = [3 - i; 2]e^{-t}(cos 2t + i sin 2t)
= [3cos 2t + sin 2t; 2cos 2t]e^{-t} + i[3sin 2t - cos 2t; 2sin 2t]e^{-t}

The general real solution has the form

c1[3cos 2t + sin 2t; 2cos 2t]e^{-t} + c2[3sin 2t - cos 2t; 2sin 2t]e^{-t}

where c1 and c2 now are real numbers. The trajectories are spirals because the eigenvalues are complex. The spirals tend toward the origin because the real parts of the eigenvalues are negative.
13. A = [4 -3; 6 -2]. An eigenvalue of A is 1 + 3i with corresponding eigenvector v = [1 + i; 2]. The complex eigenfunctions e^{λt}v and its complex conjugate form a basis for the set of all complex solutions to x′ = Ax. The general complex solution is

c1[1 + i; 2]e^{(1+3i)t} + c2[1 - i; 2]e^{(1-3i)t}

where c1 and c2 are arbitrary complex numbers. To build the general real solution, rewrite ve^{(1+3i)t} as:

ve^{(1+3i)t} = [1 + i; 2]e^{t}(cos 3t + i sin 3t)
= [cos 3t - sin 3t; 2cos 3t]e^{t} + i[sin 3t + cos 3t; 2sin 3t]e^{t}

The general real solution has the form

c1[cos 3t - sin 3t; 2cos 3t]e^{t} + c2[sin 3t + cos 3t; 2sin 3t]e^{t}

where c1 and c2 now are real numbers. The trajectories are spirals because the eigenvalues are complex. The spirals tend away from the origin because the real parts of the eigenvalues are positive.
14. A = [-2 1; -8 2]. An eigenvalue of A is 2i with corresponding eigenvector v = [1 - i; 4]. The complex eigenfunctions e^{λt}v and its complex conjugate form a basis for the set of all complex solutions to x′ = Ax. The general complex solution is

c1[1 - i; 4]e^{2it} + c2[1 + i; 4]e^{-2it}

where c1 and c2 are arbitrary complex numbers. To build the general real solution, rewrite ve^{2it} as:

ve^{2it} = [1 - i; 4](cos 2t + i sin 2t)
= [cos 2t + sin 2t; 4cos 2t] + i[sin 2t - cos 2t; 4sin 2t]

The general real solution has the form

c1[cos 2t + sin 2t; 4cos 2t] + c2[sin 2t - cos 2t; 4sin 2t]

where c1 and c2 now are real numbers. The trajectories are ellipses about the origin because the real parts of the eigenvalues are zero.
15. [M] A = [-8 -12 -6; 2 1 2; 7 12 5]. The eigenvalues of A are:

ev = eig(A) =
  1.0000
 -1.0000
 -2.0000

nulbasis(A-ev(1)*eye(3)) =
 -1.0000
  0.2500
  1.0000
so that v1 = [-4; 1; 4]

nulbasis(A-ev(2)*eye(3)) =
 -1.2000
  0.2000
  1.0000
so that v2 = [-6; 1; 5]

nulbasis(A-ev(3)*eye(3)) =
 -1.0000
  0.0000
  1.0000
so that v3 = [-1; 0; 1]

Hence the general solution is

x(t) = c1[-4; 1; 4]e^{t} + c2[-6; 1; 5]e^{-t} + c3[-1; 0; 1]e^{-2t}

The origin is a saddle point. A solution with c1 = 0 is attracted to the origin while a solution with c2 = c3 = 0 is repelled.
16. [M] A = [-6 -11 16; 2 5 -4; -4 -5 10]. The eigenvalues of A are:

ev = eig(A) =
  4.0000
  3.0000
  2.0000

nulbasis(A-ev(1)*eye(3)) =
  2.3333
 -0.6667
  1.0000
so that v1 = [7; -2; 3]

nulbasis(A-ev(2)*eye(3)) =
  3.0000
 -1.0000
  1.0000
so that v2 = [3; -1; 1]

nulbasis(A-ev(3)*eye(3)) =
  2.0000
  0.0000
  1.0000
so that v3 = [2; 0; 1]

Hence the general solution is

x(t) = c1[7; -2; 3]e^{4t} + c2[3; -1; 1]e^{3t} + c3[2; 0; 1]e^{2t}

The origin is a repellor, because all eigenvalues are positive. All trajectories tend away from the origin.
17. [M] A = [30 64 23; -11 -23 -9; 6 15 4]. The eigenvalues of A are:

ev = eig(A) =
  5.0000 + 2.0000i
  5.0000 - 2.0000i
  1.0000

nulbasis(A-ev(1)*eye(3)) =
  7.6667 - 11.3333i
 -3.0000 +  4.6667i
  1.0000
so that (scaling by 3) v1 = [23 - 34i; -9 + 14i; 3]

nulbasis(A-ev(2)*eye(3)) gives the conjugate, so that v2 = [23 + 34i; -9 - 14i; 3]

nulbasis(A-ev(3)*eye(3)) =
 -3.0000
  1.0000
  1.0000
so that v3 = [-3; 1; 1]

Hence the general complex solution is

x(t) = c1[23 - 34i; -9 + 14i; 3]e^{(5+2i)t} + c2[23 + 34i; -9 - 14i; 3]e^{(5-2i)t} + c3[-3; 1; 1]e^{t}

Rewriting the first eigenfunction yields

[23 - 34i; -9 + 14i; 3]e^{5t}(cos 2t + i sin 2t)
= [23cos 2t + 34sin 2t; -9cos 2t - 14sin 2t; 3cos 2t]e^{5t} + i[23sin 2t - 34cos 2t; -9sin 2t + 14cos 2t; 3sin 2t]e^{5t}

Hence the general real solution is

x(t) = c1[23cos 2t + 34sin 2t; -9cos 2t - 14sin 2t; 3cos 2t]e^{5t} + c2[23sin 2t - 34cos 2t; -9sin 2t + 14cos 2t; 3sin 2t]e^{5t} + c3[-3; 1; 1]e^{t}

where c1, c2, and c3 are real. The origin is a repellor, because the real parts of all eigenvalues are positive. All trajectories spiral away from the origin.
18. [M] A = [53 -30 -2; 90 -52 -3; 20 -10 2]. The eigenvalues of A are:

ev = eig(A) =
 -7.0000
  5.0000 + 1.0000i
  5.0000 - 1.0000i

nulbasis(A-ev(1)*eye(3)) =
  0.5000
  1.0000
  0.0000
so that v1 = [1; 2; 0]

nulbasis(A-ev(2)*eye(3)) =
  0.6000 + 0.2000i
  0.9000 + 0.3000i
  1.0000
so that (scaling by 10) v2 = [6 + 2i; 9 + 3i; 10]

nulbasis(A-ev(3)*eye(3)) gives the conjugate, so that v3 = [6 - 2i; 9 - 3i; 10]

Hence the general complex solution is

x(t) = c1[1; 2; 0]e^{-7t} + c2[6 + 2i; 9 + 3i; 10]e^{(5+i)t} + c3[6 - 2i; 9 - 3i; 10]e^{(5-i)t}

Rewriting the second eigenfunction yields

[6 + 2i; 9 + 3i; 10]e^{5t}(cos t + i sin t)
= [6cos t - 2sin t; 9cos t - 3sin t; 10cos t]e^{5t} + i[6sin t + 2cos t; 9sin t + 3cos t; 10sin t]e^{5t}

Hence the general real solution is

x(t) = c1[1; 2; 0]e^{-7t} + c2[6cos t - 2sin t; 9cos t - 3sin t; 10cos t]e^{5t} + c3[6sin t + 2cos t; 9sin t + 3cos t; 10sin t]e^{5t}

where c1, c2, and c3 are real. When c2 = c3 = 0 the trajectories tend toward the origin, and in other cases the trajectories spiral away from the origin.
19. [M] Substitute R1 = 1/5, R2 = 1/3, C1 = 4, and C2 = 3 into the formula for A given in Example 1, and use a matrix program to find the eigenvalues and eigenvectors:

A = [-2 3/4; 1 -1], λ1 = -.5: v1 = [1; 2], λ2 = -2.5: v2 = [-3; 2]

The general solution is thus x(t) = c1[1; 2]e^{-.5t} + c2[-3; 2]e^{-2.5t}. The condition x(0) = [4; 4] implies that [1 -3; 2 2][c1; c2] = [4; 4]. By a matrix program, c1 = 5/2 and c2 = -1/2, so that

x(t) = [v1(t); v2(t)] = (5/2)[1; 2]e^{-.5t} - (1/2)[-3; 2]e^{-2.5t}
20. [M] Substitute R1 = 1/15, R2 = 1/3, C1 = 9, and C2 = 2 into the formula for A given in Example 1, and use a matrix program to find the eigenvalues and eigenvectors:

A = [-2 1/3; 3/2 -3/2], λ1 = -1: v1 = [1; 3], λ2 = -2.5: v2 = [-2; 3]

The general solution is thus x(t) = c1[1; 3]e^{-t} + c2[-2; 3]e^{-2.5t}. The condition x(0) = [3; 3] implies that [1 -2; 3 3][c1; c2] = [3; 3]. By a matrix program, c1 = 5/3 and c2 = -2/3, so that

x(t) = [v1(t); v2(t)] = (5/3)[1; 3]e^{-t} - (2/3)[-2; 3]e^{-2.5t}
21. [M] A = [-1 -8; 5 -5]. Using a matrix program we find that an eigenvalue of A is -3 + 6i with corresponding eigenvector v = [2 + 6i; 5]. The conjugates of these form the second eigenvalue-eigenvector pair. The general complex solution is

x(t) = c1[2 + 6i; 5]e^{(-3+6i)t} + c2[2 - 6i; 5]e^{(-3-6i)t}

where c1 and c2 are arbitrary complex numbers. Rewriting the first eigenfunction and taking its real and imaginary parts, we have

ve^{(-3+6i)t} = [2 + 6i; 5]e^{-3t}(cos 6t + i sin 6t)
= [2cos 6t - 6sin 6t; 5cos 6t]e^{-3t} + i[2sin 6t + 6cos 6t; 5sin 6t]e^{-3t}

The general real solution has the form

x(t) = c1[2cos 6t - 6sin 6t; 5cos 6t]e^{-3t} + c2[2sin 6t + 6cos 6t; 5sin 6t]e^{-3t}

where c1 and c2 now are real numbers. To satisfy the initial condition x(0) = [0; 15], we solve

c1[2; 5] + c2[6; 0] = [0; 15]

to get c1 = 3, c2 = -1. We now have

x(t) = [iL(t); vC(t)] = 3[2cos 6t - 6sin 6t; 5cos 6t]e^{-3t} - [2sin 6t + 6cos 6t; 5sin 6t]e^{-3t} = [-20sin 6t; 15cos 6t - 5sin 6t]e^{-3t}
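The closed-form circuit solution can be checked against the differential equation itself, by differentiating x(t) with the product rule and comparing with Ax(t) (NumPy assumed):

```python
import numpy as np

# Exercise 21: check that x(t) = e^{-3t} (-20 sin 6t, 15 cos 6t - 5 sin 6t)
# satisfies x' = Ax and the initial condition x(0) = (0, 15).
A = np.array([[-1.0, -8.0],
              [5.0, -5.0]])

def x(t):
    return np.exp(-3 * t) * np.array([-20 * np.sin(6 * t),
                                      15 * np.cos(6 * t) - 5 * np.sin(6 * t)])

def xprime(t):
    # d/dt [e^{-3t} u(t)] = e^{-3t} (u'(t) - 3 u(t))
    u = np.array([-20 * np.sin(6 * t), 15 * np.cos(6 * t) - 5 * np.sin(6 * t)])
    du = np.array([-120 * np.cos(6 * t), -90 * np.sin(6 * t) - 30 * np.cos(6 * t)])
    return np.exp(-3 * t) * (du - 3 * u)

for t in (0.0, 0.2, 1.0):
    assert np.allclose(xprime(t), A @ x(t))
assert np.allclose(x(0.0), [0.0, 15.0])
print("closed-form solution verified")
```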
22. [M] A = [0 2; -.4 .8]. Using a matrix program we find that an eigenvalue of A is .4 + .8i with corresponding eigenvector v = [1 - 2i; 1]. The conjugates of these form the second eigenvalue-eigenvector pair. The general complex solution is

x(t) = c1[1 - 2i; 1]e^{(.4+.8i)t} + c2[1 + 2i; 1]e^{(.4-.8i)t}

where c1 and c2 are arbitrary complex numbers. Rewriting the first eigenfunction and taking its real and imaginary parts, we have

ve^{(.4+.8i)t} = [1 - 2i; 1]e^{.4t}(cos .8t + i sin .8t)
= [cos .8t + 2sin .8t; cos .8t]e^{.4t} + i[sin .8t - 2cos .8t; sin .8t]e^{.4t}

The general real solution has the form

x(t) = c1[cos .8t + 2sin .8t; cos .8t]e^{.4t} + c2[sin .8t - 2cos .8t; sin .8t]e^{.4t}

where c1 and c2 now are real numbers. To satisfy the initial condition x(0) = [0; 12], we solve

c1[1; 1] + c2[-2; 0] = [0; 12]

to get c1 = 12, c2 = 6. We now have

x(t) = [iL(t); vC(t)] = 12[cos .8t + 2sin .8t; cos .8t]e^{.4t} + 6[sin .8t - 2cos .8t; sin .8t]e^{.4t} = [30sin .8t; 12cos .8t + 6sin .8t]e^{.4t}
5.8 SOLUTIONS
1. The vectors in the given sequence approach an eigenvector v1. The last vector in the sequence, x4 = [1; .3326], is probably the best estimate for v1. To compute an estimate for λ1, examine Ax4 = [4.9978; 1.6652]. This vector is approximately λ1v1. From the first entry in this vector, an estimate of λ1 is 4.9978.

2. The vectors in the given sequence approach an eigenvector v1. The last vector in the sequence, x4 = [-.2520; 1], is probably the best estimate for v1. To compute an estimate for λ1, examine Ax4 = [-1.2536; 5.0064]. This vector is approximately λ1v1. From the second entry in this vector, an estimate of λ1 is 5.0064.

3. The vectors in the given sequence approach an eigenvector v1. The last vector in the sequence, x4 = [.5188; 1], is probably the best estimate for v1. To compute an estimate for λ1, examine Ax4 = [.4594; .9075]. This vector is approximately λ1v1. From the second entry in this vector, an estimate of λ1 is .9075.

4. The vectors in the given sequence approach an eigenvector v1. The last vector in the sequence, x4 = [1; -.7502], is probably the best estimate for v1. To compute an estimate for λ1, examine Ax4 = [4.0012; -3.0009]. This vector is approximately λ1v1. From the first entry in this vector, an estimate of λ1 is 4.0012.
5. Since A⁵x = [24991; 31241] is an estimate for an eigenvector, the vector v = (1/31241)[24991; 31241] = [.7999; 1] is a vector with a 1 in its second entry that is close to an eigenvector of A. To estimate the dominant eigenvalue λ1 of A, compute Av = [4.0015; 5.0020]. From the second entry in this vector, an estimate of λ1 is 5.0020.

6. Since A⁵x = [-2045; 4093] is an estimate for an eigenvector, the vector v = (1/4093)[-2045; 4093] = [-.4996; 1] is a vector with a 1 in its second entry that is close to an eigenvector of A. To estimate the dominant eigenvalue λ1 of A, compute Av = [-2.0008; 4.0024]. From the second entry in this vector, an estimate of λ1 is 4.0024.
7. [M] A = [6 7; 8 5], x0 = [1; 0]. The data in the table below was calculated using Mathematica, which carried more digits than shown here.

  k    x_k           Ax_k                  μ_k
  0    [1; 0]        [6; 8]                8
  1    [.75; 1]      [11.5; 11.0]          11.5
  2    [1; .9565]    [12.6957; 12.7826]    12.7826
  3    [.9932; 1]    [12.9592; 12.9456]    12.9592
  4    [1; .9990]    [12.9927; 12.9948]    12.9948
  5    [.9998; 1]    [12.9990; 12.9987]    12.9990

The actual eigenvalue is 13.
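The table's μ_k column comes from the standard power-method recipe: multiply by A, then scale by the entry of largest absolute value. A minimal sketch (NumPy assumed; the manual's data came from Mathematica):

```python
import numpy as np

# Exercise 7: power method with scaling by the largest-magnitude entry.
A = np.array([[6.0, 7.0],
              [8.0, 5.0]])
x = np.array([1.0, 0.0])
for k in range(6):
    y = A @ x
    mu = y[int(np.argmax(np.abs(y)))]   # mu_k, the entry largest in magnitude
    x = y / mu
print(round(mu, 3))   # approximately 12.999, approaching the eigenvalue 13
```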
8. [M] A = [2 1; 4 5], x0 = [1; 0]. The data in the table below was calculated using Mathematica, which carried more digits than shown here.

  k    x_k           Ax_k                μ_k
  0    [1; 0]        [2; 4]              4
  1    [.5; 1]       [2; 7]              7
  2    [.2857; 1]    [1.5714; 6.1429]    6.1429
  3    [.2558; 1]    [1.5116; 6.0233]    6.0233
  4    [.2510; 1]    [1.5019; 6.0039]    6.0039
  5    [.2502; 1]    [1.5003; 6.0006]    6.0006

The actual eigenvalue is 6.
9. [M] A = [8 0 12; 1 -2 1; 0 3 0], x0 = [1; 0; 0]. The data in the table below was calculated using Mathematica, which carried more digits than shown here.

  k    x_k                   Ax_k                      μ_k
  0    [1; 0; 0]             [8; 1; 0]                 8
  1    [1; .125; 0]          [8; .75; .375]            8
  2    [1; .0938; .0469]     [8.5625; .8594; .2812]    8.5625
  3    [1; .1004; .0328]     [8.3942; .8321; .3011]    8.3942
  4    [1; .0991; .0359]     [8.4304; .8376; .2974]    8.4304
  5    [1; .0994; .0353]     [8.4233; .8366; .2981]    8.4233
  6    [1; .0993; .0354]     [8.4246; .8368; .2979]    8.4246

Thus μ5 = 8.4233 and μ6 = 8.4246. The actual eigenvalue is (7 + √97)/2, or 8.42443 to five decimal places.
10. [M] A = [1 2 -2; 1 1 9; 0 1 9], x0 = [1; 0; 0]. The data in the table below was calculated using Mathematica, which carried more digits than shown here.

  k    x_k                   Ax_k                        μ_k
  0    [1; 0; 0]             [1; 1; 0]                   1
  1    [1; 1; 0]             [3; 2; 1]                   3
  2    [1; .6667; .3333]     [1.6667; 4.6667; 3.6667]    4.6667
  3    [.3571; 1; .7857]     [.7857; 8.4286; 8.0714]     8.4286
  4    [.0932; 1; .9576]     [.1780; 9.7119; 9.6186]     9.7119
  5    [.0183; 1; .9904]     [.0375; 9.9319; 9.9136]     9.9319
  6    [.0038; 1; .9982]     [.0075; 9.9872; 9.9834]     9.9872

Thus μ5 = 9.9319 and μ6 = 9.9872. The actual eigenvalue is 10.
11. [M] A = [5 2; 2 2], x0 = [1; 0]. The data in the table below was calculated using Mathematica, which carried more digits than shown here.

  k         0         1             2                   3                   4
  x_k       [1; 0]    [1; .4]       [1; .4828]          [1; .4971]          [1; .4995]
  Ax_k      [5; 2]    [5.8; 2.8]    [5.9655; 2.9655]    [5.9942; 2.9942]    [5.9990; 2.9990]
  μ_k       5         5.8           5.9655              5.9942              5.9990
  R(x_k)    5         5.9655        5.9990              5.99997             5.9999993

The actual eigenvalue is 6. The bottom two rows of the table show that R(x_k) estimates the eigenvalue more accurately than μ_k.
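Because this A is symmetric, the Rayleigh quotient R(x_k) = (Ax_k · x_k)/(x_k · x_k) converges roughly twice as fast as μ_k, as the table shows. A sketch computing both estimates (NumPy assumed):

```python
import numpy as np

# Exercise 11: mu_k versus the Rayleigh quotient R(x_k) for symmetric A.
A = np.array([[5.0, 2.0],
              [2.0, 2.0]])
x = np.array([1.0, 0.0])
for k in range(5):
    y = A @ x
    mu = y[int(np.argmax(np.abs(y)))]
    R = (x @ A @ x) / (x @ x)   # Rayleigh quotient of the current iterate
    x = y / mu
# R is far closer to the true eigenvalue 6 than mu is.
print(abs(mu - 6), abs(R - 6))
```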
12. [M] A = [-3 2; 2 0], x0 = [1; 0]. The data in the table below was calculated using Mathematica, which carried more digits than shown here.

  k         0          1               2               3               4
  x_k       [1; 0]     [1; -.6667]     [1; -.4615]     [1; -.5098]     [1; -.4976]
  Ax_k      [-3; 2]    [-4.3333; 2]    [-3.9231; 2]    [-4.0196; 2]    [-3.9951; 2]
  μ_k       -3         -4.3333         -3.9231         -4.0196         -3.9951
  R(x_k)    -3         -3.9231         -3.9951         -3.9997         -3.99998

The actual eigenvalue is -4. The bottom two rows of the table show that R(x_k) estimates the eigenvalue more accurately than μ_k.
13. If the eigenvalues close to 4 and -4 have different absolute values, then one of these is a strictly dominant eigenvalue, so the power method will work. But the power method depends on powers of the quotients λ2/λ1 and λ3/λ1 going to zero. If |λ2/λ1| is close to 1, its powers will go to zero slowly, and the power method will converge slowly.

14. If the eigenvalues close to 4 and -4 have the same absolute value, then neither of these is a strictly dominant eigenvalue, so the power method will not work. However, the inverse power method may still be used. If the initial estimate is chosen near the eigenvalue close to 4, then the inverse power method should produce a sequence that estimates the eigenvalue close to 4.
15. Suppose Ax = λx, with x ≠ 0. For any α, (A - αI)x = (λ - α)x. If α is not an eigenvalue of A, then A - αI is invertible and λ - α is not 0; hence

x = (λ - α)(A - αI)⁻¹x  and  (A - αI)⁻¹x = (λ - α)⁻¹x

This last equation shows that x is an eigenvector of (A - αI)⁻¹ corresponding to the eigenvalue (λ - α)⁻¹.
16. Suppose that μ is an eigenvalue of (A - αI)⁻¹ with corresponding eigenvector x. Since (A - αI)⁻¹x = μx,

x = (A - αI)(μx) = μ(Ax) - μαx

Solving this equation for Ax, we find that

Ax = (1/μ)(x + μαx) = (α + 1/μ)x

Thus λ = α + (1/μ) is an eigenvalue of A with corresponding eigenvector x.
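Exercises 15 and 16 together say that shifting and inverting moves each eigenvalue λ of A to 1/(λ - α), and that λ = α + 1/μ undoes the map. A numerical check (NumPy assumed) using the matrix of Exercise 11, whose eigenvalues are 1 and 6:

```python
import numpy as np

# Eigenvalues of (A - alpha I)^{-1} are 1/(lambda - alpha), and
# lambda = alpha + 1/mu recovers the eigenvalues of A.
A = np.array([[5.0, 2.0],
              [2.0, 2.0]])    # matrix of Exercise 11; eigenvalues 1 and 6
alpha = 3.3                   # any number that is not an eigenvalue of A
M = np.linalg.inv(A - alpha * np.eye(2))
mus = np.linalg.eigvals(M)
recovered = np.sort(alpha + 1.0 / mus)
print(np.round(recovered, 10))   # [1. 6.]
```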
17. [M] A = [10 -8 -4; -8 13 4; -4 5 4], x0 = [1; 0; 0], α = 3.3. The data in the table below was calculated using Mathematica, which carried more digits than shown here.

  k      0                              1                              2
  x_k    [1; 0; 0]                      [1; .7873; .0908]              [1; .7870; .0957]
  y_k    [26.0552; 20.5128; 2.3669]     [47.1975; 37.1436; 4.5187]     [47.1233; 37.0866; 4.5083]
  μ_k    26.0552                        47.1975                        47.1233
  ν_k    3.3384                         3.32119                        3.3212209

Thus an estimate for the eigenvalue to four decimal places is 3.3212. The actual eigenvalue is (25 - √337)/2, or 3.3212201 to seven decimal places.
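The table's ν_k column implements the inverse power method: solve (A - αI)y = x rather than forming the inverse explicitly, scale, and convert back with ν = α + 1/μ. A sketch reproducing the estimate (NumPy assumed):

```python
import numpy as np

# Exercise 17: inverse power method with shift alpha = 3.3.
A = np.array([[10.0, -8.0, -4.0],
              [-8.0, 13.0, 4.0],
              [-4.0, 5.0, 4.0]])
alpha = 3.3
x = np.array([1.0, 0.0, 0.0])
for k in range(3):
    y = np.linalg.solve(A - alpha * np.eye(3), x)   # y_k = (A - alpha I)^{-1} x_k
    mu = y[int(np.argmax(np.abs(y)))]               # mu_k
    nu = alpha + 1.0 / mu                           # nu_k estimates the eigenvalue
    x = y / mu
print(round(nu, 4))   # 3.3212, close to (25 - sqrt(337))/2
```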
18. [M] A = [8 0 12; 1 -2 1; 0 3 0], x0 = [1; 0; 0], α = -1.4. The data in the table below was calculated using Mathematica, which carried more digits than shown here.

k = 0: x0 = [1; 0; 0], y0 = [40; 14.5833; -31.25], μ0 = 40, ν0 = -1.375
k = 1: x1 = [1; .3646; -.7813], y1 = [-38.125; -14.2361; 29.9479], μ1 = -38.125, ν1 = -1.42623
k = 2: x2 = [1; .3734; -.7855], y2 = [-41.1134; -15.3300; 32.2888], μ2 = -41.1134, ν2 = -1.42432
k = 3: x3 = [1; .3729; -.7854], y3 = [-40.9243; -15.2608; 32.1407], μ3 = -40.9243, ν3 = -1.42444
k = 4: x4 = [1; .3729; -.7854], y4 = [-40.9358; -15.2650; 32.1497], μ4 = -40.9358, ν4 = -1.42443

Thus an estimate for the eigenvalue to four decimal places is -1.4244. The actual eigenvalue is (7 - √97)/2, or -1.424429 to six decimal places.
19. [M] A = [10 7 8 7; 7 5 6 5; 8 6 10 9; 7 5 9 10], x0 = (1, 0, 0, 0).

(a) The data in the table below was calculated using Mathematica, which carried more digits than shown here.

   k      0               1                          2                                     3
   x_k    (1, 0, 0, 0)    (1, .7, .8, .7)            (.988679, .709434, 1, .932075)        (.961467, .691491, 1, .942201)
   Ax_k   (10, 7, 8, 7)   (26.2, 18.8, 26.5, 24.7)   (29.3774, 21.1283, 30.5547, 28.7887)  (29.0505, 20.8987, 30.3205, 28.6097)
   µ_k    10              26.5                       30.5547                               30.3205

   k      4                                     5                                     6                                     7
   x_k    (.958115, .689261, 1, .943578)        (.957691, .688978, 1, .943755)        (.957637, .688942, 1, .943778)        (.957630, .688938, 1, .943781)
   Ax_k   (29.0110, 20.8710, 30.2927, 28.5889)  (29.0060, 20.8675, 30.2892, 28.5863)  (29.0054, 20.8671, 30.2887, 28.5859)  (29.0053, 20.8670, 30.2887, 28.5859)
   µ_k    30.2927                               30.2892                               30.2887                               30.2887

Thus an estimate for the eigenvalue to four decimal places is 30.2887. The actual eigenvalue is 30.2886853 to seven decimal places. An estimate for the corresponding eigenvector is (.957630, .688938, 1, .943781).
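The table in part (a) comes from the ordinary power method: x_{k+1} = Ax_k / µ_k, where µ_k is the entry of Ax_k of largest magnitude. A minimal plain-Python sketch of that loop (illustrative only, not the Mathematica actually used) is:

```python
# Power method sketch on the 4x4 matrix of Exercise 19.

A = [[10, 7, 8, 7],
     [7, 5, 6, 5],
     [8, 6, 10, 9],
     [7, 5, 9, 10]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]

x = [1.0, 0.0, 0.0, 0.0]
for _ in range(15):
    Ax = matvec(A, x)
    mu = max(Ax, key=abs)       # dominant entry of A x_k
    x = [v / mu for v in Ax]    # rescale so the dominant entry is 1
print(round(mu, 4))             # -> 30.2887, the estimate in the table
```

The first two passes reproduce Ax_0 = (10, 7, 8, 7) and Ax_1 = (26.2, 18.8, 26.5, 24.7) above; fifteen passes give the eigenvalue to well beyond four decimal places.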
(b) The data in the table below was calculated using Mathematica, which carried more digits than shown here.

   k     0               1                                 2                                 3                                 4
   x_k   (1, 0, 0, 0)    (−.609756, 1, −.243902, .146341)  (−.604007, 1, −.251051, .148899)  (−.603973, 1, −.251134, .148953)  (−.603972, 1, −.251135, .148953)
   y_k   (25, −41, 10, −6)   (−59.5610, 98.6098, −24.7561, 14.6829)   (−59.5041, 98.5211, −24.7420, 14.6750)   (−59.5044, 98.5217, −24.7423, 14.6751)   (−59.5044, 98.5217, −24.7423, 14.6751)
   µ_k   −41             98.6098                           98.5211                           98.5217                           98.5217
   ν_k   −.0243902       .0101410                          .0101501                          .0101500                          .0101500

Thus an estimate for the eigenvalue to five decimal places is .01015. The actual eigenvalue is .01015005 to eight decimal places. An estimate for the corresponding eigenvector is (−.603972, 1, −.251135, .148953).
20. [M] A = [1 2 3 2; 2 12 13 11; −2 3 0 2; 4 5 7 2], x0 = (1, 0, 0, 0).

(a) The data in the table below was calculated using Mathematica, which carried more digits than shown here.

   k      0               1                  2                                3                                4
   x_k    (1, 0, 0, 0)    (.25, .5, −.5, 1)  (.159091, 1, .272727, .181818)   (.187023, 1, .170483, .442748)   (.184166, 1, .180439, .402197)
   Ax_k   (1, 2, −2, 4)   (1.75, 11, 3, 2)   (3.34091, 17.8636, 3.04545, 7.90909)   (3.58397, 19.4606, 3.51145, 7.82697)   (3.52988, 19.1382, 3.43606, 7.80413)
   µ_k    4               11                 17.8636                          19.4606                          19.1382

   k      5                                6                                7                                8                                9
   x_k    (.184441, 1, .179539, .407778)   (.184414, 1, .179622, .407021)   (.184417, 1, .179615, .407121)   (.184416, 1, .179615, .407108)   (.184416, 1, .179615, .407110)
   Ax_k   (3.53861, 19.1884, 3.44667, 7.81010)   (3.53732, 19.1811, 3.44521, 7.80905)   (3.53750, 19.1822, 3.44541, 7.80921)   (3.53748, 19.1820, 3.44538, 7.80919)   (3.53748, 19.1820, 3.44539, 7.80919)
   µ_k    19.1884                          19.1811                          19.1822                          19.1820                          19.1820

Thus an estimate for the eigenvalue to four decimal places is 19.1820. The actual eigenvalue is 19.1820368 to seven decimal places. An estimate for the corresponding eigenvector is (.184416, 1, .179615, .407110).
(b) The data in the table below was calculated using Mathematica, which carried more digits than shown here.

   k     0               1                                2
   x_k   (1, 0, 0, 0)    (1, .226087, −.921739, .660870)  (1, .222577, −.917970, .660496)
   y_k   (115, 26, −106, 76)   (81.7304, 18.1913, −75.0261, 53.9826)   (81.9314, 18.2387, −75.2125, 54.1143)
   µ_k   115             81.7304                          81.9314
   ν_k   .00869565       .0122353                         .0122053

Thus an estimate for the eigenvalue to four decimal places is .0122. The actual eigenvalue is .01220556 to eight decimal places. An estimate for the corresponding eigenvector is (1, .222577, −.917970, .660496).
21. (a) A = [.8 0; 0 .2], x = [.5; .5]. Here is the sequence A^k x for k = 1, …, 5:

[.4; .1], [.32; .02], [.256; .004], [.2048; .0008], [.16384; .00016]

Notice that A⁵x is approximately .8(A⁴x). Conclusion: If the eigenvalues of A are all less than 1 in magnitude, and if x ≠ 0, then A^k x is approximately an eigenvector for large k.

(b) A = [1 0; 0 .8], x = [.5; .5]. Here is the sequence A^k x for k = 1, …, 5:

[.5; .4], [.5; .32], [.5; .256], [.5; .2048], [.5; .16384]

Notice that A^k x seems to be converging to [.5; 0]. Conclusion: If the strictly dominant eigenvalue of A is 1, and if x has a component in the direction of the corresponding eigenvector, then {A^k x} will converge to a multiple of that eigenvector.

(c) A = [8 0; 0 2], x = [.5; .5]. Here is the sequence A^k x for k = 1, …, 5:

[4; 1], [32; 2], [256; 4], [2048; 8], [16384; 16]

Notice that the distance of A^k x from either eigenvector of A is increasing rapidly as k increases. Conclusion: If the eigenvalues of A are all greater than 1 in magnitude, and if x is not an eigenvector, then the distance from A^k x to the nearest eigenvector will increase as k → ∞.
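Because all three matrices in this exercise are diagonal, A^k x can be tracked entrywise; the following plain-Python sketch (illustrative, not part of the original solution) reproduces the three behaviors just described.

```python
# Iterating x -> A x for A = diag(d1, d2), as in Exercise 21.

def iterate(d1, d2, x, k):
    for _ in range(k):
        x = [d1 * x[0], d2 * x[1]]
    return x

# (a) both eigenvalues < 1 in magnitude: A^k x shrinks toward 0,
#     lining up with the dominant eigenvector as it goes
a5 = iterate(0.8, 0.2, [0.5, 0.5], 5)     # about [.16384, .00016]
# (b) strictly dominant eigenvalue 1: A^k x converges to an eigenvector
b50 = iterate(1.0, 0.8, [0.5, 0.5], 50)   # close to [.5, 0]
# (c) both eigenvalues > 1: A^k x grows without bound
c5 = iterate(8.0, 2.0, [0.5, 0.5], 5)     # [16384.0, 16.0]
print(a5, b50, c5)
```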
Chapter 5 SUPPLEMENTARY EXERCISES
1. a. True. If A is invertible and if Ax = 1 · x for some nonzero x, then left-multiply by A⁻¹ to obtain x = A⁻¹x, which may be rewritten as A⁻¹x = 1 · x. Since x is nonzero, this shows 1 is an eigenvalue of A⁻¹.
b. False. If A is row equivalent to the identity matrix, then A is invertible. The matrix in Example 4
of Section 5.3 shows that an invertible matrix need not be diagonalizable. Also, see Exercise 31
in Section 5.3.
c. True. If A contains a row or column of zeros, then A is not row equivalent to the identity matrix
and thus is not invertible. By the Invertible Matrix Theorem (as stated in Section 5.2), 0 is an
eigenvalue of A.
d. False. Consider a diagonal matrix D whose eigenvalues are 1 and 3, that is, its diagonal entries are 1 and 3. Then D² is a diagonal matrix whose eigenvalues (diagonal entries) are 1 and 9. In general, the eigenvalues of A² are the squares of the eigenvalues of A.
e. True. Suppose a nonzero vector x satisfies Ax = λx; then

A²x = A(Ax) = A(λx) = λAx = λ²x

This shows that x is also an eigenvector for A².
f. True. Suppose a nonzero vector x satisfies Ax = λx; then left-multiply by A⁻¹ to obtain x = A⁻¹(λx) = λA⁻¹x. Since A is invertible, the eigenvalue λ is not zero. So A⁻¹x = λ⁻¹x, which shows that x is also an eigenvector of A⁻¹.
g. False. Zero is an eigenvalue of each singular square matrix.
h. True. By definition, an eigenvector must be nonzero.
i. False. Let A = [2 0; 0 2]. Then e1 = [1; 0] and e2 = [0; 1] are eigenvectors of A for the eigenvalue 2, and they are linearly independent.
j. True. This follows from Theorem 4 in Section 5.2.
k. False. Let A be the 3 × 3 matrix in Example 3 of Section 5.3. Then A is similar to a diagonal matrix D. The eigenvectors of D are the columns of I₃, but the eigenvectors of A are entirely different.
l. False. Let A = [2 0; 0 3]. Then e1 = [1; 0] and e2 = [0; 1] are eigenvectors of A, but e1 + e2 is not. (Actually, it can be shown that if two eigenvectors of A correspond to distinct eigenvalues, then their sum cannot be an eigenvector.)
m. False. All the diagonal entries of an upper triangular matrix are the eigenvalues of the matrix
(Theorem 1 in Section 5.1). A diagonal entry may be zero.
n. True. Matrices A and Aᵀ have the same characteristic polynomial, because det(Aᵀ − λI) = det(A − λI)ᵀ = det(A − λI), by the determinant transpose property.
o. False. Counterexample: Let A be the 5 × 5 identity matrix.
p. True. For example, let A be the matrix that rotates vectors through π/2 radians about the origin. Then Ax is not a multiple of x when x is nonzero.
q. False. If A is a diagonal matrix with 0 on the diagonal, then the columns of A are not linearly
independent.
r. True. If Ax = λ₁x and Ax = λ₂x, then λ₁x = λ₂x and (λ₁ − λ₂)x = 0. If x ≠ 0, then λ₁ must equal λ₂.
s. False. Let A be a singular matrix that is diagonalizable. (For instance, let A be a diagonal matrix with 0 on the diagonal.) Then, by Theorem 8 in Section 5.4, the transformation x ↦ Ax is represented by a diagonal matrix relative to a coordinate system determined by eigenvectors of A.
t. True. By definition of matrix multiplication,

A = AI = A[e1 e2 ⋯ en] = [Ae1 Ae2 ⋯ Aen]

If Aej = dj ej for j = 1, …, n, then A is a diagonal matrix with diagonal entries d1, …, dn.
u. True. If B = PDP⁻¹, where D is a diagonal matrix, and if A = QBQ⁻¹, then A = Q(PDP⁻¹)Q⁻¹ = (QP)D(QP)⁻¹, which shows that A is diagonalizable.
v. True. Since B is invertible, AB is similar to B⁻¹(AB)B, which equals BA.
w. False. Having n linearly independent eigenvectors makes an n × n matrix diagonalizable (by the Diagonalization Theorem 5 in Section 5.3), but not necessarily invertible. One of the eigenvalues of the matrix could be zero.
x. True. If A is diagonalizable, then by the Diagonalization Theorem, A has n linearly independent eigenvectors v1, …, vn in ℝⁿ. By the Basis Theorem, {v1, …, vn} spans ℝⁿ. This means that each vector in ℝⁿ can be written as a linear combination of v1, …, vn.
2. Suppose Bx ≠ 0 and (AB)x = λx for some λ. Then A(Bx) = λx. Left-multiply each side by B and obtain (BA)(Bx) = B(λx) = λ(Bx). This equation says that Bx is an eigenvector of BA, because Bx ≠ 0.
3. a. Suppose Ax = λx, with x ≠ 0. Then (5I − A)x = 5x − Ax = 5x − λx = (5 − λ)x. The eigenvalue is 5 − λ.
b. (5I − 3A + A²)x = 5x − 3Ax + A(Ax) = 5x − 3(λx) + λ²x = (5 − 3λ + λ²)x. The eigenvalue is 5 − 3λ + λ².
4. Assume that Ax = λx for some nonzero vector x. The desired statement is true for m = 1, by the assumption about λ. Suppose that for some k ≥ 1, the statement holds when m = k. That is, suppose that A^k x = λ^k x. Then A^{k+1}x = A(A^k x) = A(λ^k x) by the induction hypothesis. Continuing, A^{k+1}x = λ^k Ax = λ^{k+1}x, because x is an eigenvector of A corresponding to λ. Since x is nonzero, this equation shows that λ^{k+1} is an eigenvalue of A^{k+1}, with corresponding eigenvector x. Thus the desired statement is true when m = k + 1. By the principle of induction, the statement is true for each positive integer m.
5. Suppose Ax = λx, with x ≠ 0. Then

p(A)x = (c0I + c1A + c2A² + ⋯ + cnAⁿ)x
      = c0x + c1Ax + c2A²x + ⋯ + cnAⁿx
      = c0x + c1λx + c2λ²x + ⋯ + cnλⁿx = p(λ)x

So p(λ) is an eigenvalue of p(A).
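A quick numerical check of this fact on a small, hypothetical example (the matrix, vector, and polynomial below are illustrative choices, not from the exercise): take A = diag(2, 3), so x = (1, 0) is an eigenvector with eigenvalue λ = 2, and take p(t) = 4 + 5t + t². Then p(A)x should equal p(2)x.

```python
# Check p(A)x = p(lambda)x for A = diag(2, 3), x = (1, 0), p(t) = 4 + 5t + t^2.

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 0], [0, 3]]
I = [[1, 0], [0, 1]]
A2 = matmul(A, A)
# p(A) = 4I + 5A + A^2, assembled entrywise
pA = [[4 * I[i][j] + 5 * A[i][j] + A2[i][j] for j in range(2)] for i in range(2)]
x = [1, 0]
pAx = [sum(pA[i][j] * x[j] for j in range(2)) for i in range(2)]
p_lambda = 4 + 5 * 2 + 2 ** 2
print(pAx, p_lambda)      # -> [18, 0] 18, i.e. p(A)x = p(2)x
```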
6. a. If A = PDP⁻¹, then A^k = PD^kP⁻¹, and

B = 5I − 3A + A² = 5PIP⁻¹ − 3PDP⁻¹ + PD²P⁻¹ = P(5I − 3D + D²)P⁻¹

Since D is diagonal, so is 5I − 3D + D². Thus B is similar to a diagonal matrix.
b.

p(A) = c0I + c1PDP⁻¹ + c2PD²P⁻¹ + ⋯ + cnPDⁿP⁻¹
     = P(c0I + c1D + c2D² + ⋯ + cnDⁿ)P⁻¹
     = P p(D) P⁻¹

This shows that p(A) is diagonalizable, because p(D) is a linear combination of diagonal matrices and hence is diagonal. In fact, because D is diagonal, it is easy to see that

p(D) = [p(2) 0; 0 p(7)]
7. If A = PDP⁻¹, then p(A) = P p(D) P⁻¹, as shown in Exercise 6. If the (j, j) entry in D is λ, then the (j, j) entry in D^k is λ^k, and so the (j, j) entry in p(D) is p(λ). If p is the characteristic polynomial of A, then p(λ) = 0 for each diagonal entry of D, because these entries in D are the eigenvalues of A. Thus p(D) is the zero matrix. Thus p(A) = P · 0 · P⁻¹ = 0.
8. a. If λ is an eigenvalue of an n × n diagonalizable matrix A, then A = PDP⁻¹ for an invertible matrix P and an n × n diagonal matrix D whose diagonal entries are the eigenvalues of A. If the multiplicity of λ is n, then λ must appear in every diagonal entry of D. That is, D = λI. In this case, A = P(λI)P⁻¹ = λPIP⁻¹ = λPP⁻¹ = λI.
b. Since the matrix A = [3 1; 0 3] is triangular, its eigenvalues are on the diagonal. Thus 3 is an eigenvalue with multiplicity 2. If the 2 × 2 matrix A were diagonalizable, then A would be 3I, by part (a). This is not the case, so A is not diagonalizable.
9. If I − A were not invertible, then the equation (I − A)x = 0 would have a nontrivial solution x. Then x − Ax = 0 and Ax = 1 · x, which shows that A would have 1 as an eigenvalue. This cannot happen if all the eigenvalues are less than 1 in magnitude. So I − A must be invertible.
10. To show that A^k tends to the zero matrix, it suffices to show that each column of A^k can be made as close to the zero vector as desired by taking k sufficiently large. The jth column of A is Ae_j, where e_j is the jth column of the identity matrix. Since A is diagonalizable, there is a basis for ℝⁿ consisting of eigenvectors v1, …, vn, corresponding to eigenvalues λ1, …, λn. So there exist scalars c1, …, cn such that

e_j = c1v1 + ⋯ + cnvn    (an eigenvector decomposition of e_j)

Then, for k = 1, 2, …,

A^k e_j = c1(λ1)^k v1 + ⋯ + cn(λn)^k vn    (∗)

If the eigenvalues are all less than 1 in absolute value, then their kth powers all tend to zero. So (∗) shows that A^k e_j tends to the zero vector, as desired.
11. a. Take x in H. Then x = cu for some scalar c. So Ax = A(cu) = c(Au) = c(λu) = (cλ)u, which shows that Ax is in H.
b. Let x be a nonzero vector in K. Since K is one-dimensional, K must be the set of all scalar multiples of x. If K is invariant under A, then Ax is in K and hence Ax is a multiple of x. Thus x is an eigenvector of A.
12. Let U and V be echelon forms of A and B, obtained with r and s row interchanges, respectively, and no scaling. Then det A = (−1)^r det U and det B = (−1)^s det V.

Using first the row operations that reduce A to U, we can reduce G to a matrix of the form G′ = [U Y; 0 B]. Then, using the row operations that reduce B to V, we can further reduce G′ to G″ = [U Y; 0 V]. There will be r + s row interchanges, and so

det G = det [A X; 0 B] = (−1)^{r+s} det [U Y; 0 V]

Since [U Y; 0 V] is upper triangular, its determinant equals the product of the diagonal entries, and since U and V are upper triangular, this product also equals (det U)(det V). Thus

det G = (−1)^{r+s}(det U)(det V) = (det A)(det B)

For any scalar λ, the matrix G − λI has the same partitioned form as G, with A − λI and B − λI as its diagonal blocks. (Here I represents various identity matrices of appropriate sizes.) Hence the result about det G shows that

det(G − λI) = det(A − λI) · det(B − λI)
13. By Exercise 12, the eigenvalues of A are the eigenvalues of the matrix [3] together with the eigenvalues of [5 2; 4 3]. The only eigenvalue of [3] is 3, while the eigenvalues of [5 2; 4 3] are 1 and 7. Thus the eigenvalues of A are 1, 3, and 7.
14. By Exercise 12, the eigenvalues of A are the eigenvalues of the matrix [1 5; 2 4] together with the eigenvalues of [−7 −4; 3 1]. The eigenvalues of [1 5; 2 4] are −1 and 6, while the eigenvalues of [−7 −4; 3 1] are −5 and −1. Thus the eigenvalues of A are −1, −5, and 6, and the eigenvalue −1 has multiplicity 2.
15. Replace a by a − λ in the determinant formula from Exercise 16 in Chapter 3 Supplementary Exercises:

det(A − λI) = (a − b − λ)^{n−1}[a − λ + (n − 1)b]

This determinant is zero only if a − b − λ = 0 or a − λ + (n − 1)b = 0. Thus λ is an eigenvalue of A if and only if λ = a − b or λ = a + (n − 1)b. From the formula for det(A − λI) above, the algebraic multiplicity is n − 1 for a − b and 1 for a + (n − 1)b.
16. The 3 × 3 matrix has eigenvalues 1 − 2 and 1 + (2)(2), that is, −1 and 5. The eigenvalues of the 5 × 5 matrix are 7 − 3 and 7 + (4)(3), that is, 4 and 19.
17. Note that

det(A − λI) = (a11 − λ)(a22 − λ) − a12a21 = λ² − (a11 + a22)λ + (a11a22 − a12a21) = λ² − (tr A)λ + det A

and use the quadratic formula to solve the characteristic equation:

λ = [tr A ± √((tr A)² − 4 det A)] / 2

The eigenvalues are both real if and only if the discriminant is nonnegative, that is, (tr A)² − 4 det A ≥ 0. This inequality simplifies to (tr A)² ≥ 4 det A and det A ≤ (tr A / 2)².
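The trace/determinant formula is easy to check numerically. The sketch below (illustrative only) applies it to the 2 × 2 matrix [5 2; 4 3] that appears in Exercise 13: tr A = 8, det A = 7, so the eigenvalues are (8 ± √36)/2 = 7 and 1.

```python
# Eigenvalues of a 2x2 matrix from its trace and determinant.
import math

a11, a12, a21, a22 = 5, 2, 4, 3
tr = a11 + a22
det = a11 * a22 - a12 * a21
disc = tr ** 2 - 4 * det           # nonnegative, so both roots are real
roots = sorted([(tr - math.sqrt(disc)) / 2, (tr + math.sqrt(disc)) / 2])
print(roots)                        # -> [1.0, 7.0]
```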
18. The eigenvalues of A are 1 and .6. Use this to factor A and A^k:

A = [1 3; 2 −2] · [1 0; 0 .6] · (1/8)[2 3; 2 −1]

A^k = [1 3; 2 −2] · [1^k 0; 0 (.6)^k] · (1/8)[2 3; 2 −1]
    = (1/8) [1 3(.6)^k; 2 −2(.6)^k] · [2 3; 2 −1]
    = (1/8) [2 + 6(.6)^k   3 − 3(.6)^k; 4 − 4(.6)^k   6 + 2(.6)^k]
    → (1/8) [2 3; 4 6]  as k → ∞
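A quick check of this limit using the factorization as reconstructed above (P, D, and P⁻¹ below come from that factorization; treat the sketch as illustrative): computing A^k = P D^k P⁻¹ for a large k should give a matrix close to (1/8)[2 3; 4 6].

```python
# A^k = P D^k P^{-1} with P = [[1, 3], [2, -2]], D = diag(1, .6),
# P^{-1} = (1/8)[[2, 3], [2, -1]]; the limit is (1/8)[[2, 3], [4, 6]].

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

P = [[1, 3], [2, -2]]
Pinv = [[2 / 8, 3 / 8], [2 / 8, -1 / 8]]
k = 50
Dk = [[1.0, 0.0], [0.0, 0.6 ** k]]     # D^k = diag(1, .6^k)
Ak = matmul(matmul(P, Dk), Pinv)
print([[round(v, 6) for v in row] for row in Ak])   # ~ [[0.25, 0.375], [0.5, 0.75]]
```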
19. C_p = [0 1; −6 5]; det(C_p − λI) = 6 − 5λ + λ² = p(λ)
20. C_p = [0 1 0; 0 0 1; 24 −26 9]; det(C_p − λI) = 24 − 26λ + 9λ² − λ³ = −p(λ)
21. If p is a polynomial of degree 2, then a calculation such as in Exercise 19 shows that the characteristic polynomial of C_p is (−1)²p(λ) = p(λ), so the result is true for n = 2. Suppose the result is true for n = k for some k ≥ 2, and consider a polynomial p of degree k + 1. Then expanding det(C_p − λI) by cofactors down the first column, the determinant of C_p − λI equals

(−λ) · det [−λ 1 ⋯ 0; 0 −λ ⋱ 0; ⋮ ⋱ 1; −a1 −a2 ⋯ −a_k − λ] + (−1)^{k+1} a0

The k × k matrix shown is C_q − λI, where q(t) = a1 + a2t + ⋯ + a_k t^{k−1} + t^k. By the induction assumption, the determinant of C_q − λI is (−1)^k q(λ). Thus

det(C_p − λI) = (−1)^{k+1}a0 + (−λ)(−1)^k q(λ)
            = (−1)^{k+1}[a0 + λ(a1 + ⋯ + a_kλ^{k−1} + λ^k)]
            = (−1)^{k+1} p(λ)

So the formula holds for n = k + 1 when it holds for n = k. By the principle of induction, the formula for det(C_p − λI) is true for all n ≥ 2.
22. a.

C_p = [0 1 0; 0 0 1; −a0 −a1 −a2]

b. Since λ is a zero of p, a0 + a1λ + a2λ² + λ³ = 0 and −a0 − a1λ − a2λ² = λ³. Thus

C_p [1; λ; λ²] = [λ; λ²; −a0 − a1λ − a2λ²] = [λ; λ²; λ³] = λ[1; λ; λ²]

That is, C_p(1, λ, λ²) = λ(1, λ, λ²), which shows that (1, λ, λ²) is an eigenvector of C_p corresponding to the eigenvalue λ.
23. From Exercise 22, the columns of the Vandermonde matrix V are eigenvectors of C_p, corresponding to the eigenvalues λ1, λ2, λ3 (the roots of the polynomial p). Since these eigenvalues are distinct, the eigenvectors form a linearly independent set, by Theorem 2 in Section 5.1. Thus V has linearly independent columns and hence is invertible, by the Invertible Matrix Theorem. Finally, since the columns of V are eigenvectors of C_p, the Diagonalization Theorem (Theorem 5 in Section 5.3) shows that V⁻¹C_pV is diagonal.
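The companion-matrix facts of Exercises 20–23 can be checked concretely. For the cubic p(t) = (t − 2)(t − 3)(t − 4) = t³ − 9t² + 26t − 24, the bottom row of C_p is (−a0, −a1, −a2) = (24, −26, 9), and each Vandermonde column (1, λ, λ²) built from a root should satisfy C_p v = λv (an illustrative sketch, not from the manual):

```python
# Each Vandermonde column (1, lam, lam^2) is an eigenvector of the
# companion matrix of p(t) = t^3 - 9t^2 + 26t - 24, whose roots are 2, 3, 4.

Cp = [[0, 1, 0],
      [0, 0, 1],
      [24, -26, 9]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

for lam in (2, 3, 4):
    v = [1, lam, lam ** 2]
    assert matvec(Cp, v) == [lam * vi for vi in v]
print("each (1, lam, lam^2) is an eigenvector of Cp")
```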
24. [M] The MATLAB command roots(p) requires as input a row vector p whose entries are the coefficients of a polynomial, with the highest order coefficient listed first. MATLAB constructs a companion matrix C_p whose characteristic polynomial is p, so the roots of p are the eigenvalues of C_p. The numerical values of the eigenvalues (roots) are found by the same QR algorithm used by the command eig(A).
25. [M] The MATLAB command [P D] = eig(A) produces a matrix P, whose condition number is 1.6 × 10⁸, and a diagonal matrix D, whose entries are almost 2, 2, 1. However, the exact eigenvalues of A are 2, 2, 1, and A is not diagonalizable.
26. [M] This matrix may cause the same sort of trouble as the matrix in Exercise 25. A matrix program that computes eigenvalues by an iterative process may indicate that A has four distinct eigenvalues, all close to zero. However, the only eigenvalue is 0, with multiplicity 4, because A⁴ = 0.
6.1 SOLUTIONS
Notes: The first half of this section is computational and is easily learned. The second half concerns the concepts of orthogonality and orthogonal complements, which are essential for later work. Theorem 3 is an important general fact, but is needed only for Supplementary Exercise 13 at the end of the chapter and in Section 7.4. The optional material on angles is not used later. Exercises 27–31 concern facts used later.

1. Since u = [−1; 2] and v = [4; 6], u·u = (−1)² + 2² = 5, v·u = 4(−1) + 6(2) = 8, and (v·u)/(u·u) = 8/5.

2. Since w = [3; −1; −5] and x = [6; −2; 3], w·w = 3² + (−1)² + (−5)² = 35, x·w = 6(3) + (−2)(−1) + 3(−5) = 5, and (x·w)/(w·w) = 5/35 = 1/7.

3. Since w = [3; −1; −5], w·w = 3² + (−1)² + (−5)² = 35, and (1/(w·w))w = [3/35; −1/35; −1/7].

4. Since u = [−1; 2], u·u = (−1)² + 2² = 5 and (1/(u·u))u = [−1/5; 2/5].

5. Since u = [−1; 2] and v = [4; 6], u·v = (−1)(4) + 2(6) = 8, v·v = 4² + 6² = 52, and ((u·v)/(v·v))v = (8/52)[4; 6] = (2/13)[4; 6] = [8/13; 12/13].
358 CHAPTER 6 Orthogonality and Least Squares
Copyright © 2012 Pearson Education, Inc. Publishing as Addison-Wesley.
6. Since x = [6; −2; 3] and w = [3; −1; −5], x·w = 6(3) + (−2)(−1) + 3(−5) = 5, x·x = 6² + (−2)² + 3² = 49, and ((x·w)/(x·x))x = (5/49)[6; −2; 3] = [30/49; −10/49; 15/49].
7. Since w = [3; −1; −5], ||w|| = √(w·w) = √(3² + (−1)² + (−5)²) = √35.
8. Since x = [6; −2; 3], ||x|| = √(x·x) = √(6² + (−2)² + 3²) = √49 = 7.
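The computations in Exercises 1–8 all reduce to dot products. A small plain-Python sketch (illustrative, not part of the manual) using the vectors w = (3, −1, −5) and x = (6, −2, 3) from above:

```python
# Dot products, projection weights, and norms from Exercises 1-8.
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

w = [3, -1, -5]
x = [6, -2, 3]
print(dot(w, w))             # w.w = 35
print(dot(x, w))             # x.w = 5, so (x.w)/(w.w) = 1/7
print(math.sqrt(dot(x, x)))  # ||x|| = 7.0
```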
9. A unit vector in the direction of the given vector is

(1/√((−30)² + 40²)) [−30; 40] = (1/50)[−30; 40] = [−3/5; 4/5]
10. A unit vector in the direction of the given vector is

(1/√((−6)² + 4² + (−3)²)) [−6; 4; −3] = (1/√61)[−6; 4; −3] = [−6/√61; 4/√61; −3/√61]
11. A unit vector in the direction of the given vector is

(1/√((7/4)² + (1/2)² + 1²)) [7/4; 1/2; 1] = (1/√(69/16))[7/4; 1/2; 1] = [7/√69; 2/√69; 4/√69]
12. A unit vector in the direction of the given vector is

(1/√((8/3)² + 2²)) [8/3; 2] = (1/√(100/9))[8/3; 2] = [4/5; 3/5]
13. Since x = [10; −3] and y = [−1; −5], ||x − y||² = [10 − (−1)]² + [−3 − (−5)]² = 125 and dist(x, y) = √125 = 5√5.
14. Since u = [0; −5; 2] and z = [−4; −1; 8], ||u − z||² = [0 − (−4)]² + [−5 − (−1)]² + [2 − 8]² = 68 and dist(u, z) = √68 = 2√17.
15. Since a·b = 8(−2) + (−5)(−3) = −1 ≠ 0, a and b are not orthogonal.
16. Since u·v = 12(2) + (3)(−3) + (−5)(3) = 0, u and v are orthogonal.
17. Since u·v = 3(−4) + 2(1) + (−5)(−2) + 0(6) = 0, u and v are orthogonal.
18. Since y·z = (−3)(1) + 7(−8) + 4(15) + 0(−7) = 1 ≠ 0, y and z are not orthogonal.
19. a. True. See the definition of || v ||.
b. True. See Theorem 1(c).
c. True. See the discussion of Figure 5.
d. False. Counterexample: [1 1; 0 0].
e. True. See the box following Example 6.
20. a. True. See Example 1 and Theorem 1(a).
b. False. The absolute value sign is missing. See the box before Example 2.
c. True. See the definition of orthogonal complement.
d. True. See the Pythagorean Theorem.
e. True. See Theorem 3.
21. Theorem 1(b): (u + v)·w = (u + v)ᵀw = (uᵀ + vᵀ)w = uᵀw + vᵀw = u·w + v·w. The second and third equalities used Theorems 3(b) and 2(c), respectively, from Section 2.1.

Theorem 1(c): (cu)·v = (cu)ᵀv = c(uᵀv) = c(u·v). The second equality used Theorems 3(c) and 2(d), respectively, from Section 2.1.
22. Since u·u is the sum of the squares of the entries in u, u·u ≥ 0. The sum of squares of numbers is zero if and only if all the numbers are themselves zero.
23. One computes that u·v = 2(−7) + (−5)(−4) + (−1)6 = 0, ||u||² = u·u = 2² + (−5)² + (−1)² = 30, ||v||² = v·v = (−7)² + (−4)² + 6² = 101, and ||u + v||² = (u + v)·(u + v) = (2 + (−7))² + (−5 + (−4))² + (−1 + 6)² = 131.
24. One computes that

||u + v||² = (u + v)·(u + v) = u·u + 2u·v + v·v = ||u||² + 2u·v + ||v||²

and

||u − v||² = (u − v)·(u − v) = u·u − 2u·v + v·v = ||u||² − 2u·v + ||v||²

so

||u + v||² + ||u − v||² = ||u||² + 2u·v + ||v||² + ||u||² − 2u·v + ||v||² = 2||u||² + 2||v||²
25. When v = [a; b], the set H of all vectors [x; y] that are orthogonal to v is the subspace of vectors whose entries satisfy ax + by = 0. If a ≠ 0, then x = −(b/a)y with y a free variable, and H is a line through the origin. A natural choice for a basis for H in this case is {[−b; a]}. If a = 0 and b ≠ 0, then by = 0. Since b ≠ 0, y = 0 and x is a free variable. The subspace H is again a line through the origin. A natural choice for a basis for H in this case is {[1; 0]}, but {[−b; a]} is still a basis for H since a = 0 and b ≠ 0. If a = 0 and b = 0, then H = ℝ² since the equation 0x + 0y = 0 places no restrictions on x or y.
26. Theorem 2 in Chapter 4 may be used to show that W is a subspace of ℝ³, because W is the null space of the 1 × 3 matrix uᵀ. Geometrically, W is a plane through the origin.
27. If y is orthogonal to u and v, then y·u = y·v = 0, and hence by a property of the inner product, y·(u + v) = y·u + y·v = 0 + 0 = 0. Thus y is orthogonal to u + v.
28. An arbitrary w in Span{u, v} has the form w = c1u + c2v. If y is orthogonal to u and v, then u·y = v·y = 0. By Theorem 1(b) and 1(c),

w·y = (c1u + c2v)·y = c1(u·y) + c2(v·y) = 0 + 0 = 0
29. A typical vector in W has the form w = c1v1 + ⋯ + cpvp. If x is orthogonal to each vj, then by Theorems 1(b) and 1(c),

w·x = (c1v1 + ⋯ + cpvp)·x = c1(v1·x) + … + cp(vp·x) = 0

So x is orthogonal to each w in W.
30. a. If z is in W⊥, u is in W, and c is any scalar, then (cz)·u = c(z·u) = c·0 = 0. Since u is any element of W, cz is in W⊥.
b. Let z1 and z2 be in W⊥. Then for any u in W, (z1 + z2)·u = z1·u + z2·u = 0 + 0 = 0. Thus z1 + z2 is in W⊥.
c. Since 0 is orthogonal to every vector, 0 is in W⊥. Thus W⊥ is a subspace.
31. Suppose that x is in W and W⊥. Since x is in W⊥, x is orthogonal to every vector in W, including x itself. So x·x = 0, which happens only when x = 0.
32. [M]
a. One computes that ||a1|| = ||a2|| = ||a3|| = ||a4|| = 1 and that ai·aj = 0 for i ≠ j.
b. Answers will vary, but it should be that || Au || = || u || and || Av || = || v ||.
c. Answers will again vary, but the cosines should be equal.
d. A conjecture is that multiplying by A does not change the lengths of vectors or the angles
between vectors.
33. [M] Answers to the calculations will vary, but will demonstrate that the mapping x ↦ T(x) = ((x·v)/(v·v))v (for v ≠ 0) is a linear transformation. To confirm this, let x and y be in ℝⁿ, and let c be any scalar. Then

T(x + y) = (((x + y)·v)/(v·v))v = ((x·v + y·v)/(v·v))v = ((x·v)/(v·v))v + ((y·v)/(v·v))v = T(x) + T(y)

and

T(cx) = (((cx)·v)/(v·v))v = ((c(x·v))/(v·v))v = c((x·v)/(v·v))v = cT(x)
34. [M] One finds that

N = [5 1; 1 4; −1 0; 0 1; 0 −3],   R = [1 0 5 0 1/3; 0 1 1 0 4/3; 0 0 0 1 1/3]

The row-column rule for computing RN produces the 3 × 2 zero matrix, which shows that the rows of R are orthogonal to the columns of N. This is expected by Theorem 3 since each row of R is in Row A and each column of N is in Nul A.
6.2 SOLUTIONS
Notes:
The nonsquare matrices in Theorems 6 and 7 are needed for the QR factorization in Section 6.4. It
is important to emphasize that the term orthogonal matrix applies only to certain square matrices. The
subsection on orthogonal projections not only sets the stage for the general case in Section 6.3, it also
provides what is needed for the orthogonal diagonalization exercises in Section 7.1, because none of the
eigenspaces there have dimension greater than 2. For this reason, the Gram-Schmidt process (Section 6.4)
is not really needed in Chapter 7. Exercises 13 and 14 are good preparation for Section 6.3.
1. Since [−1; 4; −3]·[3; −4; −7] = 2 ≠ 0, the set is not orthogonal.
2. Since [1; −2; 1]·[0; 1; 2] = [1; −2; 1]·[−5; −2; 1] = [0; 1; 2]·[−5; −2; 1] = 0, the set is orthogonal.
3. Since [−6; −3; 9]·[3; 1; −1] = −30 ≠ 0, the set is not orthogonal.
4. Since [2; −5; −3]·[0; 0; 0] = [2; −5; −3]·[4; −2; 6] = [0; 0; 0]·[4; −2; 6] = 0, the set is orthogonal.
5. Since [3; −2; 1; 3]·[−1; 3; −3; 4] = [3; −2; 1; 3]·[3; 8; 7; 0] = [−1; 3; −3; 4]·[3; 8; 7; 0] = 0, the set is orthogonal.
6. Since [−4; 1; −3; 8]·[3; 3; 5; −1] = −32 ≠ 0, the set is not orthogonal.
7. Since u1·u2 = −12 + 12 = 0, {u1, u2} is an orthogonal set. Since the vectors are non-zero, u1 and u2 are linearly independent by Theorem 4. Two such vectors in ℝ² automatically form a basis for ℝ². So {u1, u2} is an orthogonal basis for ℝ². By Theorem 5,

x = ((x·u1)/(u1·u1))u1 + ((x·u2)/(u2·u2))u2 = 3u1 + (1/2)u2
8. Since u1·u2 = −6 + 6 = 0, {u1, u2} is an orthogonal set. Since the vectors are non-zero, u1 and u2 are linearly independent by Theorem 4. Two such vectors in ℝ² automatically form a basis for ℝ². So {u1, u2} is an orthogonal basis for ℝ². By Theorem 5,

x = ((x·u1)/(u1·u1))u1 + ((x·u2)/(u2·u2))u2 = −(3/2)u1 + (3/4)u2
9. Since u1·u2 = u1·u3 = u2·u3 = 0, {u1, u2, u3} is an orthogonal set. Since the vectors are non-zero, u1, u2, and u3 are linearly independent by Theorem 4. Three such vectors in ℝ³ automatically form a basis for ℝ³. So {u1, u2, u3} is an orthogonal basis for ℝ³. By Theorem 5,

x = ((x·u1)/(u1·u1))u1 + ((x·u2)/(u2·u2))u2 + ((x·u3)/(u3·u3))u3 = (5/2)u1 − (3/2)u2 + 2u3
10. Since u1·u2 = u1·u3 = u2·u3 = 0, {u1, u2, u3} is an orthogonal set. Since the vectors are non-zero, u1, u2, and u3 are linearly independent by Theorem 4. Three such vectors in ℝ³ automatically form a basis for ℝ³. So {u1, u2, u3} is an orthogonal basis for ℝ³. By Theorem 5,

x = ((x·u1)/(u1·u1))u1 + ((x·u2)/(u2·u2))u2 + ((x·u3)/(u3·u3))u3 = (4/3)u1 + (1/3)u2 + (1/3)u3
11. Let y = [1; 7] and u = [−4; 2]. The orthogonal projection of y onto the line through u and the origin is the orthogonal projection of y onto u, and this vector is

ŷ = ((y·u)/(u·u))u = (1/2)u = [−2; 1]
12. Let y = [1; −1] and u = [−1; 3]. The orthogonal projection of y onto the line through u and the origin is the orthogonal projection of y onto u, and this vector is

ŷ = ((y·u)/(u·u))u = −(2/5)u = [2/5; −6/5]
13. The orthogonal projection of y onto u is

ŷ = ((y·u)/(u·u))u = −(13/65)u = [−4/5; 7/5]

The component of y orthogonal to u is

y − ŷ = [14/5; 8/5]

Thus y = ŷ + (y − ŷ) = [−4/5; 7/5] + [14/5; 8/5].
14. The orthogonal projection of y onto u is

ŷ = ((y·u)/(u·u))u = (2/5)u = [14/5; 2/5]

The component of y orthogonal to u is

y − ŷ = [−4/5; 28/5]

Thus y = ŷ + (y − ŷ) = [14/5; 2/5] + [−4/5; 28/5].
15. The distance from y to the line through u and the origin is ||y − ŷ||. One computes that

y − ŷ = y − ((y·u)/(u·u))u = [3; 1] − (3/10)[8; 6] = [3/5; −4/5]

so ||y − ŷ|| = √(9/25 + 16/25) = 1 is the desired distance.
16. The distance from y to the line through u and the origin is ||y − ŷ||. One computes that
y − ŷ = y − (y·u/u·u)u = [−3; 9] − 3[1; 2] = [−6; 3]
so ||y − ŷ|| = √(36 + 9) = 3√5 is the desired distance.
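The distance computations in Exercises 15 and 16 both reduce to one projection and one norm. A quick sketch, using assumed sample vectors of the same kind:

```python
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def proj_onto_line(y, u):
    # y_hat = ((y . u) / (u . u)) u
    c = Fraction(dot(y, u), dot(u, u))
    return [c * ui for ui in u]

y, u = [-3, 9], [1, 2]                      # assumed data for illustration
y_hat = proj_onto_line(y, u)                # the closest point on the line
resid = [a - b for a, b in zip(y, y_hat)]   # component orthogonal to u
dist_sq = dot(resid, resid)                 # squared distance from y to the line
```

With these vectors the residual is [−6; 3] and the squared distance is 45, so the distance itself is √45 = 3√5.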
17. Let u = [1/3; 1/3; 1/3] and v = [−1/2; 0; 1/2]. Since u·v = 0, {u, v} is an orthogonal set. However, ||u||^2 = u·u = 1/3 and ||v||^2 = v·v = 1/2, so {u, v} is not an orthonormal set. The vectors u and v may be normalized to form the orthonormal set
{ u/||u||, v/||v|| } = { [√3/3; √3/3; √3/3], [−√2/2; 0; √2/2] }
18. Let u = [0; 1; 0] and v = [0; −1; 0]. Since u·v = −1 ≠ 0, {u, v} is not an orthogonal set.
19. Let u = [−.6; .8] and v = [.8; .6]. Since u·v = 0, {u, v} is an orthogonal set. Also, ||u||^2 = u·u = 1 and ||v||^2 = v·v = 1, so {u, v} is an orthonormal set.
20. Let u = [−2/3; 1/3; 2/3] and v = [1/3; 2/3; 0]. Since u·v = 0, {u, v} is an orthogonal set. However, ||u||^2 = u·u = 1 and ||v||^2 = v·v = 5/9, so {u, v} is not an orthonormal set. The vectors u and v may be normalized to form the orthonormal set
{ u/||u||, v/||v|| } = { [−2/3; 1/3; 2/3], [1/√5; 2/√5; 0] }
21. Let u = [1/√10; 3/√20; 3/√20], v = [3/√10; −1/√20; −1/√20], and w = [0; −1/√2; 1/√2]. Since u·v = u·w = v·w = 0, {u, v, w} is an orthogonal set. Also, ||u||^2 = u·u = 1, ||v||^2 = v·v = 1, and ||w||^2 = w·w = 1, so {u, v, w} is an orthonormal set.
22. Let u = [1/√18; 4/√18; 1/√18], v = [1/√2; 0; −1/√2], and w = [−2/3; 1/3; −2/3]. Since u·v = u·w = v·w = 0, {u, v, w} is an orthogonal set. Also, ||u||^2 = u·u = 1, ||v||^2 = v·v = 1, and ||w||^2 = w·w = 1, so {u, v, w} is an orthonormal set.
23. a. True. For example, the vectors u and y in Example 3 are linearly independent but not orthogonal.
b . True. The formulas for the weights are given in Theorem 5.
c . False. See the paragraph following Example 5.
d . False. The matrix must also be square. See the paragraph before Example 7.
e . False. See Example 4. The distance is ||y − ŷ||.
24. a. True. But every orthogonal set of nonzero vectors is linearly independent. See Theorem 4.
b . False. To be orthonormal, the vectors in S must be unit vectors as well as being orthogonal to each other.
c . True. See Theorem 7(a).
d . True. See the paragraph before Example 3.
e . True. See the paragraph before Example 7.
25. To prove part (b), note that
(Ux)·(Uy) = (Ux)^T(Uy) = x^T U^T U y = x^T y = x·y
because U^T U = I. If y = x in part (b), then (Ux)·(Ux) = x·x, which implies part (a). Part (c) of the Theorem follows immediately from part (b).
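The identity (Ux)·(Uy) = x·y in Exercise 25 is easy to check numerically. The matrix below is a hypothetical example with orthonormal columns (not one from the text), chosen with rational entries so the check is exact:

```python
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(M, x):
    return [dot(row, x) for row in M]

F = Fraction
# Columns [3/5; 4/5] and [-4/5; 3/5] are unit length and orthogonal
U = [[F(3, 5), F(-4, 5)],
     [F(4, 5), F(3, 5)]]

x, y = [1, 2], [3, -1]
Ux, Uy = matvec(U, x), matvec(U, y)
# (Ux) . (Uy) equals x . y, illustrating part (b)
```

Since U^T U = I for this U, multiplication by U preserves dot products, lengths, and angles.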
26. A set of n nonzero orthogonal vectors must be linearly independent by Theorem 4, so if such a set spans W it is a basis for W. Thus W is an n-dimensional subspace of R^n, and W = R^n.
27. If U has orthonormal columns, then U^T U = I by Theorem 6. If U is also a square matrix, then the equation U^T U = I implies that U is invertible by the Invertible Matrix Theorem.
28. If U is an n × n orthogonal matrix, then I = UU^{−1} = UU^T. Since U is the transpose of U^T, Theorem 6 applied to U^T says that U^T has orthonormal columns. In particular, the columns of U^T are linearly independent and hence form a basis for R^n by the Invertible Matrix Theorem. That is, the rows of U form a basis (an orthonormal basis) for R^n.
29. Since U and V are orthogonal, each is invertible. By Theorem 6 in Section 2.2, UV is invertible and
(UV)^{−1} = V^{−1}U^{−1} = V^T U^T = (UV)^T
where the final equality holds by Theorem 3 in Section 2.1. Thus UV is an orthogonal matrix.
30. If U is an orthogonal matrix, its columns are orthonormal. Interchanging the columns does not change their orthonormality, so the new matrix, say V, still has orthonormal columns. By Theorem 6, V^T V = I. Since V is square, V^T = V^{−1} by the Invertible Matrix Theorem.
31. Suppose that ŷ = (y·u/u·u)u. Replacing u by cu with c ≠ 0 gives
(y·(cu) / (cu)·(cu))(cu) = c(y·u)/(c^2(u·u)) (cu) = c^2(y·u)/(c^2(u·u)) u = (y·u/u·u)u = ŷ
So ŷ does not depend on the choice of a nonzero u in the line L used in the formula.
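Exercise 31's scale-invariance can be tested numerically. The vectors below are assumptions for illustration; replacing u by 3u or by −u leaves the projection unchanged:

```python
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def proj(y, u):
    c = Fraction(dot(y, u), dot(u, u))
    return [c * ui for ui in u]

y, u = [2, 6], [4, 2]                  # assumed sample vectors
p1 = proj(y, u)
p2 = proj(y, [3 * ui for ui in u])     # same line, rescaled spanning vector
p3 = proj(y, [-ui for ui in u])        # same line, reversed spanning vector
```

All three projections agree, since the scalar c and the vector cu change in compensating ways.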
32. If v1·v2 = 0, then by Theorem 1(c) in Section 6.1,
(c1v1)·(c2v2) = c1[v1·(c2v2)] = c1c2(v1·v2) = c1c2(0) = 0
33. Let L = Span{u}, where u is nonzero, and let T(x) = (x·u/u·u)u. For any vectors x and y in R^n and any scalars c and d, the properties of the inner product (Theorem 1) show that
T(cx + dy) = ((cx + dy)·u / u·u)u
= ((cx·u + dy·u) / u·u)u
= c(x·u/u·u)u + d(y·u/u·u)u
= cT(x) + dT(y)
Thus T is a linear transformation. Another approach is to view T as the composition of the following three linear mappings: x ↦ a = x·v, a ↦ b = a/(v·v), and b ↦ bv.
34. Let L = Span{u}, where u is nonzero, and let T(y) = refl_L y = 2 proj_L y − y. By Exercise 33, the mapping y ↦ proj_L y is linear. Thus for any vectors y and z in R^n and any scalars c and d,
T(cy + dz) = 2 proj_L (cy + dz) − (cy + dz)
= 2(c proj_L y + d proj_L z) − cy − dz
= c(2 proj_L y − y) + d(2 proj_L z − z)
= cT(y) + dT(z)
Thus T is a linear transformation.
35. [M] One can compute that A^T A = 100 I_4. Since the off-diagonal entries in A^T A are zero, the columns of A are orthogonal.
36. [M]
a. One computes that U^T U = I_4, while UU^T is (1/100) times the 8 × 8 symmetric matrix whose entries, up to sign, are
[82 0 20 8 6 20 24 0; 0 42 24 0 20 6 20 32; 20 24 58 20 0 32 0 6; 8 0 20 82 24 20 6 0; 6 20 0 24 18 0 8 20; 20 6 32 20 0 58 0 24; 24 20 0 6 8 0 18 20; 0 32 6 0 20 24 20 42]
The matrices U^T U and UU^T are of different sizes and look nothing like each other.
b. Answers will vary. The vector p = UU^T y is in Col U because p = U(U^T y). Since the columns of U are simply scaled versions of the columns of A, Col U = Col A. Thus each p is in Col A.
c. One computes that U^T z = 0.
d. From (c), z is orthogonal to each column of A. By Exercise 29 in Section 6.1, z must be orthogonal to every vector in Col A; that is, z is in (Col A)^⊥.
6.3 SOLUTIONS
Notes:
Example 1 seems to help students understand Theorem 8. Theorem 8 is needed for the Gram-
Schmidt process (but only for a subspace that itself has an orthogonal basis). Theorems 8 and 9 are
needed for the discussions of least squares in Sections 6.5 and 6.6. Theorem 10 is used with the QR
factorization to provide a good numerical method for solving least squares problems, in Section 6.5.
Exercises 19 and 20 lead naturally into consideration of the Gram-Schmidt process.
1. The vector in Span{u4} is
(x·u4/u4·u4)u4 = (72/36)u4 = 2u4 = [10; −6; −2; 2]
Since x = c1u1 + c2u2 + c3u3 + (x·u4/u4·u4)u4, the vector
x − (x·u4/u4·u4)u4 = [10; −8; 2; 0] − [10; −6; −2; 2] = [0; −2; 4; −2]
is in Span{u1, u2, u3}.
2. The vector in Span{u1} is
(v·u1/u1·u1)u1 = (14/7)u1 = 2u1 = [2; 4; 2; 2]
Since v = (v·u1/u1·u1)u1 + c2u2 + c3u3 + c4u4, the vector
v − (v·u1/u1·u1)u1 = [4; 5; −3; 3] − [2; 4; 2; 2] = [2; 1; −5; 1]
is in Span{u2, u3, u4}.
3. Since u1·u2 = −1 + 1 + 0 = 0, {u1, u2} is an orthogonal set. The orthogonal projection of y onto Span{u1, u2} is
ŷ = (y·u1/u1·u1)u1 + (y·u2/u2·u2)u2 = (3/2)u1 + (5/2)u2 = (3/2)[1; 1; 0] + (5/2)[−1; 1; 0] = [−1; 4; 0]
4. Since u1·u2 = −12 + 12 + 0 = 0, {u1, u2} is an orthogonal set. The orthogonal projection of y onto Span{u1, u2} is
ŷ = (y·u1/u1·u1)u1 + (y·u2/u2·u2)u2 = (30/25)u1 + (15/25)u2 = (6/5)[3; 4; 0] + (3/5)[−4; 3; 0] = [6/5; 33/5; 0]
5. Since u1·u2 = 3 + 1 − 4 = 0, {u1, u2} is an orthogonal set. The orthogonal projection of y onto Span{u1, u2} is
ŷ = (y·u1/u1·u1)u1 + (y·u2/u2·u2)u2 = (7/14)u1 − (15/6)u2 = (1/2)[3; −1; 2] − (5/2)[1; −1; −2] = [−1; 2; 6]
6. Since u1·u2 = 0 + 1 − 1 = 0, {u1, u2} is an orthogonal set. The orthogonal projection of y onto Span{u1, u2} is
ŷ = (y·u1/u1·u1)u1 + (y·u2/u2·u2)u2 = (27/18)u1 + (5/2)u2 = (3/2)[−4; 1; 1] + (5/2)[0; 1; −1] = [−6; 4; −1]
7. Since u1·u2 = 5 + 3 − 8 = 0, {u1, u2} is an orthogonal set. By the Orthogonal Decomposition Theorem,
ŷ = (y·u1/u1·u1)u1 + (y·u2/u2·u2)u2 = 0u1 + (2/3)u2 = [10/3; 2/3; 8/3],  z = y − ŷ = [−7/3; 7/3; 7/3]
and y = ŷ + z, where ŷ is in W and z is in W^⊥.
8. Since u1·u2 = −1 + 3 − 2 = 0, {u1, u2} is an orthogonal set. By the Orthogonal Decomposition Theorem,
ŷ = (y·u1/u1·u1)u1 + (y·u2/u2·u2)u2 = 2u1 + (1/2)u2 = [−3/2; 7/2; 1],  z = y − ŷ = [5/2; 1/2; 2]
and y = ŷ + z, where ŷ is in W and z is in W^⊥.
9. Since u1·u2 = u1·u3 = u2·u3 = 0, {u1, u2, u3} is an orthogonal set. By the Orthogonal Decomposition Theorem,
ŷ = (y·u1/u1·u1)u1 + (y·u2/u2·u2)u2 + (y·u3/u3·u3)u3 = 2u1 + (2/3)u2 − (2/3)u3 = [2; 4; 0; 0],  z = y − ŷ = [2; −1; 3; −1]
and y = ŷ + z, where ŷ is in W and z is in W^⊥.
10. Since u1·u2 = u1·u3 = u2·u3 = 0, {u1, u2, u3} is an orthogonal set. By the Orthogonal Decomposition Theorem,
ŷ = (y·u1/u1·u1)u1 + (y·u2/u2·u2)u2 + (y·u3/u3·u3)u3 = (1/3)u1 + (14/3)u2 − (5/3)u3 = [5; 2; 3; 6],  z = y − ŷ = [−2; 2; 2; 0]
and y = ŷ + z, where ŷ is in W and z is in W^⊥.
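The decompositions in Exercises 7–10 follow one recipe: project onto each orthogonal basis vector, sum, and subtract. A sketch of that recipe on a hypothetical orthogonal set in R^4 (assumed data, not necessarily the book's):

```python
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def decompose(y, basis):
    # y_hat = sum_i ((y . u_i)/(u_i . u_i)) u_i ;  z = y - y_hat
    y_hat = [Fraction(0)] * len(y)
    for u in basis:
        c = Fraction(dot(y, u), dot(u, u))
        y_hat = [h + c * ui for h, ui in zip(y_hat, u)]
    z = [a - b for a, b in zip(y, y_hat)]
    return y_hat, z

u1, u2, u3 = [1, 1, 0, 1], [-1, 3, 1, -2], [-1, 0, 1, 1]  # pairwise orthogonal
y = [4, 3, 3, -1]
y_hat, z = decompose(y, [u1, u2, u3])
```

The residual z is orthogonal to every u_i, so y = ŷ + z with ŷ in W and z in W^⊥, exactly as the Orthogonal Decomposition Theorem states.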
11. Note that v1 and v2 are orthogonal. The Best Approximation Theorem says that ŷ, which is the orthogonal projection of y onto W = Span{v1, v2}, is the closest point to y in W. This vector is
ŷ = (y·v1/v1·v1)v1 + (y·v2/v2·v2)v2 = (1/2)v1 + (3/2)v2 = [3; −1; 1; −1]
12. Note that v1 and v2 are orthogonal. The Best Approximation Theorem says that ŷ, which is the orthogonal projection of y onto W = Span{v1, v2}, is the closest point to y in W. This vector is
ŷ = (y·v1/v1·v1)v1 + (y·v2/v2·v2)v2 = 3v1 + v2 = [−1; −5; −3; 9]
13. Note that v1 and v2 are orthogonal. By the Best Approximation Theorem, the closest point in Span{v1, v2} to z is
ẑ = (z·v1/v1·v1)v1 + (z·v2/v2·v2)v2 = (2/3)v1 − (7/3)v2 = [−1; −3; −2; 3]
14. Note that v1 and v2 are orthogonal. By the Best Approximation Theorem, the closest point in Span{v1, v2} to z is
ẑ = (z·v1/v1·v1)v1 + (z·v2/v2·v2)v2 = 0v1 + (1/2)v2 = [1; 0; −1/2; 3/2]
15. The distance from the point y in R^3 to a subspace W is defined as the distance from y to the closest point in W. Since the closest point in W to y is ŷ = proj_W y, the desired distance is ||y − ŷ||. One computes that
ŷ = [3; −9; −1],  y − ŷ = [2; 0; 6]
and ||y − ŷ|| = √40 = 2√10.
16. The distance from the point y in R^4 to a subspace W is defined as the distance from y to the closest point in W. Since the closest point in W to y is ŷ = proj_W y, the desired distance is ||y − ŷ||. One computes that
ŷ = [−1; −5; −3; 9],  y − ŷ = [4; 4; 4; 4]
and ||y − ŷ|| = 8.
17. a. U^T U = [1 0; 0 1],  UU^T = [8/9 −2/9 2/9; −2/9 5/9 4/9; 2/9 4/9 5/9]
b. Since U^T U = I_2, the columns of U form an orthonormal basis for W, and by Theorem 10
proj_W y = UU^T y = [8/9 −2/9 2/9; −2/9 5/9 4/9; 2/9 4/9 5/9][4; 8; 1] = [2; 4; 5]
18. a. U^T U = [1],  UU^T = [1/10 −3/10; −3/10 9/10]
b. Since U^T U = 1, {u1} forms an orthonormal basis for W, and by Theorem 10
proj_W y = UU^T y = [1/10 −3/10; −3/10 9/10][7; 9] = [−2; 6]
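Exercises 17 and 18 build the projection matrix UU^T from orthonormal columns. A one-column sketch using an assumed rational unit vector (so the arithmetic stays exact):

```python
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

F = Fraction
u1 = [F(3, 5), F(4, 5)]            # assumed unit vector, not the book's
# P = u1 u1^T is the Theorem 10 matrix for a one-column U
P = [[a * b for b in u1] for a in u1]
y = [5, 10]
proj = [dot(row, y) for row in P]  # proj_W y
# A projection matrix is idempotent: P P = P
PP = [[sum(P[i][k] * P[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]
```

Here proj equals (y·u1)u1 = [33/5; 44/5], and squaring P returns P itself, which is the defining property of an orthogonal projection matrix.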
19. By the Orthogonal Decomposition Theorem, u3 is the sum of a vector in W = Span{u1, u2} and a vector v orthogonal to W. This exercise asks for the vector v:
v = u3 − proj_W u3 = [0; 0; 1] − [0; −2/5; 4/5] = [0; 2/5; 1/5]
Any multiple of the vector v will also be in W^⊥.
20. By the Orthogonal Decomposition Theorem, u4 is the sum of a vector in W = Span{u1, u2} and a vector v orthogonal to W. This exercise asks for the vector v:
v = u4 − proj_W u4 = u4 − ((1/6)u1 + (1/30)u2) = [0; 1; 0] − [0; 1/5; −2/5] = [0; 4/5; 2/5]
Any multiple of the vector v will also be in W^⊥.
21. a. True. See the calculations for z2 in Example 1 or the box after Example 6 in Section 6.1.
b . True. See the Orthogonal Decomposition Theorem.
c . False. See the last paragraph in the proof of Theorem 8, or see the second paragraph after the
statement of Theorem 9.
d . True. See the box before the Best Approximation Theorem.
e . True. Theorem 10 applies to the column space W of U because the columns of U are linearly
independent and hence form a basis for W.
22. a. True. See the proof of the Orthogonal Decomposition Theorem.
b . True. See the subsection “A Geometric Interpretation of the Orthogonal Projection.”
c . True. The orthogonal decomposition in Theorem 8 is unique.
d . False. The Best Approximation Theorem says that the best approximation to y is proj_W y.
e . False. This statement is only true if x is in the column space of U. If n > p, then the column space of U will not be all of R^n, so the statement cannot be true for all x in R^n.
23. By the Orthogonal Decomposition Theorem, each x in R^n can be written uniquely as x = p + u, with p in Row A and u in (Row A)^⊥. By Theorem 3 in Section 6.1, (Row A)^⊥ = Nul A, so u is in Nul A.
Next, suppose Ax = b is consistent. Let x be a solution and write x = p + u as above. Then Ap = A(x − u) = Ax − Au = b − 0 = b, so the equation Ax = b has at least one solution p in Row A.
Finally, suppose that p and p1 are both in Row A and both satisfy Ax = b. Then p − p1 is in Nul A = (Row A)^⊥, since A(p − p1) = Ap − Ap1 = b − b = 0. The equations p = p1 + (p − p1) and p = p + 0 both then decompose p as the sum of a vector in Row A and a vector in (Row A)^⊥. By the uniqueness of the orthogonal decomposition (Theorem 8), p = p1, and p is unique.
24. a. By hypothesis, the vectors w1, …, wp are pairwise orthogonal, and the vectors v1, …, vq are pairwise orthogonal. Since wi is in W for any i and vj is in W^⊥ for any j, wi·vj = 0 for any i and j. Thus {w1, …, wp, v1, …, vq} forms an orthogonal set.
b. For any y in R^n, write y = ŷ + z as in the Orthogonal Decomposition Theorem, with ŷ in W and z in W^⊥. Then there exist scalars c1, …, cp and d1, …, dq such that
y = ŷ + z = c1w1 + … + cpwp + d1v1 + … + dqvq
Thus the set {w1, …, wp, v1, …, vq} spans R^n.
c. The set {w1, …, wp, v1, …, vq} is linearly independent by (a) and spans R^n by (b), and is thus a basis for R^n. Hence dim W + dim W^⊥ = p + q = n.
25. [M] Since U^T U = I_4, U has orthonormal columns by Theorem 6 in Section 6.2. The closest point to y in Col U is the orthogonal projection ŷ of y onto Col U. From Theorem 10,
ŷ = UU^T y = [1.2; .4; 1.2; 1.2; .4; 1.2; .4; .4]
26. [M] The distance from b to Col U is ||b − b̂||, where b̂ = UU^T b. One computes that
b̂ = UU^T b = [.2; .92; .44; 1; .2; .44; .6; .92],  b − b̂ = [.8; .08; .56; 0; .8; .56; 1.6; .08]
so ||b − b̂|| = √4.48, which is 2.1166 to four decimal places.
6.4 SOLUTIONS
Notes:
The QR factorization encapsulates the essential outcome of the Gram-Schmidt process, just as the
LU factorization describes the result of a row reduction process. For practical use of linear algebra, the
factorizations are more important than the algorithms that produce them. In fact, the Gram-Schmidt
process is not the appropriate way to compute the QR factorization. For that reason, one should consider
deemphasizing the hand calculation of the Gram-Schmidt process, even though it provides easy exam
questions.
The Gram-Schmidt process is used in Sections 6.7 and 6.8, in connection with various sets of
orthogonal polynomials. The process is mentioned in Sections 7.1 and 7.4, but the one-dimensional
projection constructed in Section 6.2 will suffice. The QR factorization is used in an optional subsection
of Section 6.5, and it is needed in Supplementary Exercise 7 of Chapter 7 to produce the Cholesky
factorization of a positive definite matrix.
1. Set v1 = x1 and compute that
v2 = x2 − (x2·v1/v1·v1)v1 = x2 − 3v1 = [−1; 5; −3]
Thus an orthogonal basis for W is
{ [3; 0; −1], [−1; 5; −3] }
2. Set v1 = x1 and compute that
v2 = x2 − (x2·v1/v1·v1)v1 = x2 − (1/2)v1 = [5; 4; −8]
Thus an orthogonal basis for W is
{ [0; 4; 2], [5; 4; −8] }
3. Set v1 = x1 and compute that
v2 = x2 − (x2·v1/v1·v1)v1 = x2 − (1/2)v1 = [3; −3/2; −3/2]
Thus an orthogonal basis for W is
{ [2; 5; −1], [3; −3/2; −3/2] }
4. Set v1 = x1 and compute that
v2 = x2 − (x2·v1/v1·v1)v1 = x2 − (−2)v1 = x2 + 2v1 = [3; 6; 3]
Thus an orthogonal basis for W is
{ [3; −4; 5], [3; 6; 3] }
5. Set v1 = x1 and compute that
v2 = x2 − (x2·v1/v1·v1)v1 = x2 − 2v1 = [5; −1; 4; 1]
Thus an orthogonal basis for W is
{ [1; 4; 0; −1], [5; −1; 4; 1] }
6. Set v1 = x1 and compute that
v2 = x2 − (x2·v1/v1·v1)v1 = x2 − (−3)v1 = x2 + 3v1 = [4; 6; −3; 0]
Thus an orthogonal basis for W is
{ [3; −1; 2; −1], [4; 6; −3; 0] }
7. Since ||v1|| = √30 and ||v2|| = √(27/2) = 3√6/2, an orthonormal basis for W is
{ v1/||v1||, v2/||v2|| } = { [2/√30; 5/√30; −1/√30], [2/√6; −1/√6; −1/√6] }
8. Since ||v1|| = √50 and ||v2|| = √54 = 3√6, an orthonormal basis for W is
{ v1/||v1||, v2/||v2|| } = { [3/√50; −4/√50; 5/√50], [1/√6; 2/√6; 1/√6] }
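Exercises 1–8 are all instances of one Gram-Schmidt step: subtract from each new vector its projections onto the vectors already produced. A compact exact-arithmetic sketch (the starting vectors are assumptions for illustration):

```python
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(xs):
    vs = []
    for x in xs:
        v = [Fraction(a) for a in x]
        for w in vs:
            c = dot(v, w) / dot(w, w)   # weight of the projection onto w
            v = [vi - c * wi for vi, wi in zip(v, w)]
        vs.append(v)
    return vs

x1, x2 = [3, 0, -1], [8, 5, -6]          # assumed input vectors
v1, v2 = gram_schmidt([x1, x2])
```

For these inputs the step subtracts 3v1 from x2, leaving v2 = [−1; 5; −3], and the two outputs are orthogonal.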
9. Call the columns of the matrix x1, x2, and x3 and perform the Gram-Schmidt process on these vectors:
v1 = x1
v2 = x2 − (x2·v1/v1·v1)v1 = x2 − (−2)v1 = [1; 3; 3; −1]
v3 = x3 − (x3·v1/v1·v1)v1 − (x3·v2/v2·v2)v2 = x3 − (3/2)v1 − (1/2)v2 = [−3; 1; 1; 3]
Thus an orthogonal basis for W is
{ [3; −1; 1; 3], [1; 3; 3; −1], [−3; 1; 1; 3] }
10. Call the columns of the matrix x1, x2, and x3 and perform the Gram-Schmidt process on these vectors:
v1 = x1
v2 = x2 − (x2·v1/v1·v1)v1 = x2 − (−3)v1 = [3; 1; 1; −1]
v3 = x3 − (x3·v1/v1·v1)v1 − (x3·v2/v2·v2)v2 = x3 − (1/2)v1 − (−5/2)v2 = [1; 1; −3; 1]
Thus an orthogonal basis for W is
{ [−1; 3; 1; 1], [3; 1; 1; −1], [1; 1; −3; 1] }
11. Call the columns of the matrix x1, x2, and x3 and perform the Gram-Schmidt process on these vectors:
v1 = x1
v2 = x2 − (x2·v1/v1·v1)v1 = x2 − (−1)v1 = [3; 0; 3; −3; 3]
v3 = x3 − (x3·v1/v1·v1)v1 − (x3·v2/v2·v2)v2 = x3 − 4v1 − (−1/3)v2 = [2; 0; 2; 2; −2]
Thus an orthogonal basis for W is
{ [1; −1; −1; 1; 1], [3; 0; 3; −3; 3], [2; 0; 2; 2; −2] }
12. Call the columns of the matrix x1, x2, and x3 and perform the Gram-Schmidt process on these vectors:
v1 = x1
v2 = x2 − (x2·v1/v1·v1)v1 = x2 − 4v1 = [1; 1; 2; −1; −1]
v3 = x3 − (x3·v1/v1·v1)v1 − (x3·v2/v2·v2)v2 = x3 − (7/2)v1 − (3/2)v2 = [−3; 3; 0; 3; −3]
Thus an orthogonal basis for W is
{ [1; 1; 0; 1; 1], [1; 1; 2; −1; −1], [−3; 3; 0; 3; −3] }
13. Since A and Q are given,
R = Q^T A = [5/6 1/6 −3/6 1/6; −1/6 5/6 1/6 3/6][5 9; 1 7; −3 −5; 1 5] = [6 12; 0 6]
14. Since A and Q are given,
R = Q^T A = [−2/7 5/7 2/7 4/7; 5/7 2/7 −4/7 2/7][−2 3; 5 7; 2 −2; 4 6] = [7 7; 0 7]
15. The columns of Q will be normalized versions of the vectors v1, v2, and v3 found in Exercise 11. Thus
Q = [1/√5 1/2 1/2; −1/√5 0 0; −1/√5 1/2 1/2; 1/√5 −1/2 1/2; 1/√5 1/2 −1/2],  R = Q^T A = [√5 −√5 4√5; 0 6 −2; 0 0 4]
16. The columns of Q will be normalized versions of the vectors v1, v2, and v3 found in Exercise 12. Thus
Q = [1/2 1/(2√2) −1/2; 1/2 1/(2√2) 1/2; 0 1/√2 0; 1/2 −1/(2√2) 1/2; 1/2 −1/(2√2) −1/2],  R = Q^T A = [2 8 7; 0 2√2 3√2; 0 0 6]
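Exercises 13–16 rest on the fact that R = Q^T A recovers the triangular factor, and QR rebuilds A. The check below uses an assumed Q with rational orthonormal columns so everything stays exact:

```python
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

F = Fraction
Q = [[F(5, 6), F(-1, 6)],
     [F(1, 6), F(5, 6)],
     [F(-3, 6), F(1, 6)],
     [F(1, 6), F(3, 6)]]            # columns are orthonormal (verified below)
A = [[5, 9], [1, 7], [-3, -5], [1, 5]]

def cols(M):
    return list(zip(*M))

R = [[dot(qc, ac) for ac in cols(A)] for qc in cols(Q)]   # R = Q^T A
QR = [[dot(row, rc) for rc in cols(R)] for row in Q]      # should rebuild A
```

Here R comes out upper triangular with positive diagonal, and multiplying Q by R reproduces A entry for entry.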
17. a. False. Scaling was used in Example 2, but the scale factor was nonzero.
b . True. See (1) in the statement of Theorem 11.
c . True. See the solution of Example 4.
18. a. False. The three orthogonal vectors must be nonzero to be a basis for a three-dimensional subspace. (This was the case in Step 3 of the solution of Example 2.)
b . True. If x is not in a subspace W, then x cannot equal proj_W x, because proj_W x is in W. This idea was used for v_{k+1} in the proof of Theorem 11.
c . True. See Theorem 12.
19. Suppose that x satisfies Rx = 0; then QRx = Q0 = 0, and Ax = 0. Since the columns of A are linearly independent, x must be 0. This fact, in turn, shows that the columns of R are linearly independent. Since R is square, it is invertible by the Invertible Matrix Theorem.
20. If y is in Col A, then y = Ax for some x. Then y = QRx = Q(Rx), which shows that y is a linear combination of the columns of Q using the entries in Rx as weights. Conversely, suppose that y = Qx for some x. Since R is invertible, the equation A = QR implies that Q = AR^{−1}. So y = Qx = AR^{−1}x = A(R^{−1}x), which shows that y is in Col A.
21. Denote the columns of Q by q1, …, qn. Note that n ≤ m, because A is m × n and has linearly independent columns. The columns of Q can be extended to an orthonormal basis for R^m as follows. Let f1 be the first vector in the standard basis for R^m that is not in W_n = Span{q1, …, qn}, let u1 = f1 − proj_{W_n} f1, and let q_{n+1} = u1/||u1||. Then {q1, …, qn, q_{n+1}} is an orthonormal basis for W_{n+1} = Span{q1, …, qn, q_{n+1}}. Next let f2 be the first vector in the standard basis for R^m that is not in W_{n+1}, let u2 = f2 − proj_{W_{n+1}} f2, and let q_{n+2} = u2/||u2||. Then {q1, …, qn, q_{n+1}, q_{n+2}} is an orthogonal basis for W_{n+2} = Span{q1, …, qn, q_{n+1}, q_{n+2}}. This process will continue until m − n vectors have been added to the original n vectors, and {q1, …, qn, q_{n+1}, …, qm} is an orthonormal basis for R^m.
Let Q0 = [q_{n+1} … qm] and Q1 = [Q Q0]. Then, using partitioned matrix multiplication,
Q1 [R; O] = QR = A
22. We may assume that {u1, …, up} is an orthonormal basis for W, by normalizing the vectors in the original basis given for W, if necessary. Let U be the matrix whose columns are u1, …, up. Then, by Theorem 10 in Section 6.3, T(x) = proj_W x = UU^T x for x in R^n. Thus T is a matrix transformation and hence is a linear transformation, as was shown in Section 1.8.
23. Given A = QR, partition A = [A1 A2], where A1 has p columns. Partition Q as [Q1 Q2] where Q1 has p columns, and partition R as
R = [R11 R12; O R22]
where R11 is a p × p matrix. Then
A = [A1 A2] = QR = [Q1 Q2][R11 R12; O R22] = [Q1R11  Q1R12 + Q2R22]
Thus A1 = Q1R11. The matrix Q1 has orthonormal columns because its columns come from Q. The matrix R11 is square and upper triangular due to its position within the upper triangular matrix R. The diagonal entries of R11 are positive because they are diagonal entries of R. Thus Q1R11 is a QR factorization of A1.
24. [M] Call the columns of the matrix x1, x2, x3, and x4 and perform the Gram-Schmidt process on these vectors:
v1 = x1
v2 = x2 − (x2·v1/v1·v1)v1 = x2 − (−1)v1 = [3; 3; −3; 0; 3]
v3 = x3 − (x3·v1/v1·v1)v1 − (x3·v2/v2·v2)v2 = x3 − (−1/2)v1 − (−4/3)v2 = [6; 0; 6; 6; 0]
v4 = x4 − (x4·v1/v1·v1)v1 − (x4·v2/v2·v2)v2 − (x4·v3/v3·v3)v3 = x4 − (1/2)v1 − (−1)v2 − (−1/2)v3 = [0; 5; 0; 0; −5]
Thus an orthogonal basis for W is
{ [−10; 2; −6; 16; 2], [3; 3; −3; 0; 3], [6; 0; 6; 6; 0], [0; 5; 0; 0; −5] }
25. [M] The columns of Q will be normalized versions of the vectors v1, v2, v3, and v4 found in Exercise 24. Thus
Q = [−1/2 1/2 1/√3 0; 1/10 1/2 0 1/√2; −3/10 −1/2 1/√3 0; 4/5 0 1/√3 0; 1/10 1/2 0 −1/√2],  R = Q^T A = [20 −20 −10 10; 0 6 −8 −6; 0 0 6√3 −3√3; 0 0 0 5√2]
26. [M] In MATLAB, when A has n columns, suitable commands are

Q = A(:,1)/norm(A(:,1))       % The first column of Q
for j=2:n
   v = A(:,j) - Q*(Q'*A(:,j))
   Q(:,j) = v/norm(v)         % Add a new column to Q
end
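A rough Python transliteration of the MATLAB idea in Exercise 26 (floating point, classical Gram-Schmidt with normalization; this is an illustrative sketch, not a drop-in replacement):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(v):
    return dot(v, v) ** 0.5

def orthonormal_columns(columns):
    # Subtract the components along the columns found so far,
    # then normalize what is left (assumes the inputs are independent).
    q = []
    for x in columns:
        v = list(map(float, x))
        for w in q:
            c = dot(v, w)
            v = [vi - c * wi for vi, wi in zip(v, w)]
        n = norm(v)
        q.append([vi / n for vi in v])
    return q

qs = orthonormal_columns([[3.0, 4.0], [1.0, 0.0]])
```

As with the MATLAB loop, each output column is a unit vector orthogonal to the earlier ones. (In floating point, modified Gram-Schmidt or Householder QR is preferred for stability, as the section's Numerical Notes indicate.)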
6.5 SOLUTIONS
Notes:
This is a core section: the basic geometric principles in this section provide the foundation for all the applications in Sections 6.6–6.8. Yet this section need not take a full day. Each example provides a stopping place. Theorem 13 and Example 1 are all that is needed for Section 6.6. Theorem 15, however, gives an illustration of why the QR factorization is important. Example 4 is related to Exercise 17 in Section 6.6.
1. To find the normal equations and to find x̂, compute
A^T A = [−1 2 −1; 2 −3 3][−1 2; 2 −3; −1 3] = [6 −11; −11 22]
A^T b = [−1 2 −1; 2 −3 3][4; 1; 2] = [−4; 11]
a. The normal equations are (A^T A)x = A^T b:
[6 −11; −11 22][x1; x2] = [−4; 11]
b. Compute
x̂ = (A^T A)^{−1} A^T b = (1/11)[22 11; 11 6][−4; 11] = (1/11)[33; 22] = [3; 2]
2. To find the normal equations and to find x̂, compute
A^T A = [2 −2 2; 1 0 3][2 1; −2 0; 2 3] = [12 8; 8 10]
A^T b = [2 −2 2; 1 0 3][−5; 8; 1] = [−24; −2]
a. The normal equations are (A^T A)x = A^T b:
[12 8; 8 10][x1; x2] = [−24; −2]
b. Compute
x̂ = (A^T A)^{−1} A^T b = (1/56)[10 −8; −8 12][−24; −2] = (1/56)[−224; 168] = [−4; 3]
3. To find the normal equations and to find x̂, compute
A^T A = [1 −1 0 2; −2 2 3 5][1 −2; −1 2; 0 3; 2 5] = [6 6; 6 42]
A^T b = [1 −1 0 2; −2 2 3 5][3; 1; −4; 2] = [6; −6]
a. The normal equations are (A^T A)x = A^T b:
[6 6; 6 42][x1; x2] = [6; −6]
b. Compute
x̂ = (A^T A)^{−1} A^T b = (1/216)[42 −6; −6 6][6; −6] = (1/216)[288; −72] = [4/3; −1/3]
4. To find the normal equations and to find x̂, compute
A^T A = [1 1 1; 3 −1 1][1 3; 1 −1; 1 1] = [3 3; 3 11]
A^T b = [1 1 1; 3 −1 1][5; 1; 0] = [6; 14]
a. The normal equations are (A^T A)x = A^T b:
[3 3; 3 11][x1; x2] = [6; 14]
b. Compute
x̂ = (A^T A)^{−1} A^T b = (1/24)[11 −3; −3 3][6; 14] = (1/24)[24; 24] = [1; 1]
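Exercises 1–4 all follow one pattern: form A^T A and A^T b, then solve the 2 × 2 normal equations. A sketch of that pattern with assumed data shaped like Exercise 4's:

```python
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

F = Fraction
A = [[1, 3], [1, -1], [1, 1]]     # assumed data for illustration
b = [5, 1, 0]
acols = list(zip(*A))
AtA = [[dot(ci, cj) for cj in acols] for ci in acols]
Atb = [dot(ci, b) for ci in acols]
# Solve the 2x2 normal equations via the explicit inverse
(a11, a12), (a21, a22) = AtA
det = a11 * a22 - a12 * a21
x_hat = [F(a22 * Atb[0] - a12 * Atb[1], det),
         F(-a21 * Atb[0] + a11 * Atb[1], det)]
```

For this data the normal equations are [3 3; 3 11]x = [6; 14], and the unique least-squares solution is x̂ = [1; 1].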
5. To find the least-squares solutions to Ax = b, compute and row reduce the augmented matrix for the system A^T Ax = A^T b:
[A^T A  A^T b] = [4 2 2 14; 2 2 0 4; 2 0 2 10] ~ [1 0 1 5; 0 1 −1 −3; 0 0 0 0]
so all vectors of the form
x̂ = [5; −3; 0] + x3[−1; 1; 1]
are the least-squares solutions of Ax = b.
6. To find the least-squares solutions to Ax = b, compute and row reduce the augmented matrix for the system A^T Ax = A^T b:
[A^T A  A^T b] = [6 3 3 27; 3 3 0 12; 3 0 3 15] ~ [1 0 1 5; 0 1 −1 −1; 0 0 0 0]
so all vectors of the form
x̂ = [5; −1; 0] + x3[−1; 1; 1]
are the least-squares solutions of Ax = b.
7. From Exercise 3, A = [1 −2; −1 2; 0 3; 2 5], b = [3; 1; −4; 2], and x̂ = [4/3; −1/3]. Since
Ax̂ = [1 −2; −1 2; 0 3; 2 5][4/3; −1/3] = [2; −2; −1; 1]
Ax̂ − b = [2; −2; −1; 1] − [3; 1; −4; 2] = [−1; −3; 3; −1]
the least-squares error is ||Ax̂ − b|| = √20 = 2√5.
8. From Exercise 4, A = [1 3; 1 −1; 1 1], b = [5; 1; 0], and x̂ = [1; 1]. Since
Ax̂ = [1 3; 1 −1; 1 1][1; 1] = [4; 0; 2]
Ax̂ − b = [4; 0; 2] − [5; 1; 0] = [−1; −1; 2]
the least-squares error is ||Ax̂ − b|| = √6.
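The least-squares error in Exercises 7 and 8 is just the norm of the residual Ax̂ − b. A short sketch with assumed data of the same shape as Exercise 8's:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

A = [[1, 3], [1, -1], [1, 1]]    # assumed data for illustration
b = [5, 1, 0]
x_hat = [1, 1]
Ax = [dot(row, x_hat) for row in A]
r = [bi - ai for bi, ai in zip(b, Ax)]
err_sq = dot(r, r)               # the least-squares error is sqrt(err_sq)
```

Here Ax̂ = [4; 0; 2], the residual is [1; 1; −2], and the squared error is 6, so the error itself is √6.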
9. (a) Because the columns a1 and a2 of A are orthogonal, the method of Example 4 may be used to find b̂, the orthogonal projection of b onto Col A:
b̂ = (b·a1/a1·a1)a1 + (b·a2/a2·a2)a2 = (2/7)a1 + (1/7)a2 = (2/7)[1; 3; −2] + (1/7)[5; 1; 4] = [1; 1; 0]
(b) The vector x̂ contains the weights which must be placed on a1 and a2 to produce b̂. These weights are easily read from the above equation, so x̂ = [2/7; 1/7].
10. (a) Because the columns a1 and a2 of A are orthogonal, the method of Example 4 may be used to find b̂, the orthogonal projection of b onto Col A:
b̂ = (b·a1/a1·a1)a1 + (b·a2/a2·a2)a2 = 3a1 + (1/2)a2 = 3[1; −1; 1] + (1/2)[2; 4; 2] = [4; −1; 4]
(b) The vector x̂ contains the weights which must be placed on a1 and a2 to produce b̂. These weights are easily read from the above equation, so x̂ = [3; 1/2].
11. (a) Because the columns a1, a2, and a3 of A are orthogonal, the method of Example 4 may be used to find b̂, the orthogonal projection of b onto Col A:
b̂ = (b·a1/a1·a1)a1 + (b·a2/a2·a2)a2 + (b·a3/a3·a3)a3 = (2/3)a1 + 0a2 + (1/3)a3
= (2/3)[4; 1; 6; 1] + 0[0; −5; 1; −1] + (1/3)[1; 1; 0; −5] = [3; 1; 4; −1]
(b) The vector x̂ contains the weights which must be placed on a1, a2, and a3 to produce b̂. These weights are easily read from the above equation, so x̂ = [2/3; 0; 1/3].
12. (a) Because the columns a1, a2, and a3 of A are orthogonal, the method of Example 4 may be used to find b̂, the orthogonal projection of b onto Col A:
b̂ = (b·a1/a1·a1)a1 + (b·a2/a2·a2)a2 + (b·a3/a3·a3)a3 = (1/3)a1 + (14/3)a2 − (5/3)a3
= (1/3)[1; 1; 0; −1] + (14/3)[1; 0; 1; 1] − (5/3)[0; −1; 1; −1] = [5; 2; 3; 6]
(b) The vector x̂ contains the weights which must be placed on a1, a2, and a3 to produce b̂. These weights are easily read from the above equation, so x̂ = [1/3; 14/3; −5/3].
13. One computes that
Au = [11; 11; 11],  b − Au = [0; −2; −6],  ||b − Au|| = √40
Av = [7; 12; 7],  b − Av = [4; −3; −2],  ||b − Av|| = √29
Since Av is closer to b than Au is, Au is not the closest point in Col A to b. Thus u cannot be a least-squares solution of Ax = b.
14. One computes that
Au = [3; −8; 2],  b − Au = [2; 4; 2],  ||b − Au|| = √24
Av = [7; −2; 8],  b − Av = [−2; −2; −4],  ||b − Av|| = √24
Since Au and Av are equally close to b, and the orthogonal projection is the unique closest point in Col A to b, neither Au nor Av can be the closest point in Col A to b. Thus neither u nor v can be a least-squares solution of Ax = b.
15. The least-squares solution satisfies Rx̂ = Qᵀb. Since
        R = [3 5; 0 1]  and  Qᵀb = [7; -1],
    the augmented matrix for the system may be row reduced to find
        [R  Qᵀb] = [3 5 7; 0 1 -1] ~ [1 0 4; 0 1 -1]
    and so x̂ = [4; -1] is the least-squares solution of Ax = b.
16. The least-squares solution satisfies Rx̂ = Qᵀb. Since
        R = [2 3; 0 5]  and  Qᵀb = [17/2; 9/2],
    the augmented matrix for the system may be row reduced to find
        [R  Qᵀb] = [2 3 17/2; 0 5 9/2] ~ [1 0 2.9; 0 1 .9]
    and so x̂ = [2.9; .9] is the least-squares solution of Ax = b.
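Exercises 15 and 16 both reduce least squares to a small triangular system Rx̂ = Qᵀb; a sketch of the same computation in NumPy, using the matrices from Exercise 15, is:

```python
import numpy as np

# Data from Exercise 15: A = QR with R upper triangular, Q^T b given
R = np.array([[3.0, 5.0],
              [0.0, 1.0]])
QTb = np.array([7.0, -1.0])

# R is upper triangular, so back substitution applies; for a 2x2 system
# np.linalg.solve handles it directly.
x_hat = np.linalg.solve(R, QTb)
```

Solving the triangular system is numerically preferable to forming the normal equations, which is the point of the QR approach.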
17. a. True. See the beginning of the section. The distance from Ax to b is ||Ax - b||.
    b. True. See the comments about equation (1).
    c. False. The inequality points in the wrong direction. See the definition of a least-squares solution.
    d. True. See Theorem 13.
    e. True. See Theorem 14.
18. a. True. See the paragraph following the definition of a least-squares solution.
    b. False. If x̂ is the least-squares solution, then Ax̂ is the point in the column space of A closest to b. See Figure 1 and the paragraph preceding it.
    c. True. See the discussion following equation (1).
    d. False. The formula applies only when the columns of A are linearly independent. See Theorem 14.
    e. False. See the comments after Example 4.
    f. False. See the Numerical Note.
19. a. If Ax = 0, then AᵀAx = Aᵀ0 = 0. This shows that Nul A is contained in Nul AᵀA.
    b. If AᵀAx = 0, then xᵀAᵀAx = xᵀ0 = 0. So (Ax)ᵀ(Ax) = 0, which means that ||Ax||² = 0, and hence Ax = 0. This shows that Nul AᵀA is contained in Nul A.
20. Suppose that Ax = 0. Then AᵀAx = Aᵀ0 = 0. Since AᵀA is invertible, x must be 0. Hence the columns of A are linearly independent.
21. a. If A has linearly independent columns, then the equation Ax = 0 has only the trivial solution. By Exercise 19, the equation AᵀAx = 0 also has only the trivial solution. Since AᵀA is a square matrix, it must be invertible by the Invertible Matrix Theorem.
    b. Since the n linearly independent columns of A belong to ℝᵐ, m could not be less than n.
    c. The n linearly independent columns of A form a basis for Col A, so the rank of A is n.
22. Note that AᵀA has n columns because A does. Then by the Rank Theorem and Exercise 19,
        rank AᵀA = n - dim Nul AᵀA = n - dim Nul A = rank A
23. By Theorem 14, b̂ = Ax̂ = A(AᵀA)⁻¹Aᵀb. The matrix A(AᵀA)⁻¹Aᵀ is sometimes called the hat matrix in statistics.
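The defining properties of the hat matrix H = A(AᵀA)⁻¹Aᵀ (it is symmetric, idempotent, and fixes Col A) are easy to confirm numerically; the matrix A below is an arbitrary illustration, not one from the text:

```python
import numpy as np

# An arbitrary matrix with linearly independent columns (illustration only)
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])

# Hat matrix: H b = A x_hat is the orthogonal projection of b onto Col A
H = A @ np.linalg.inv(A.T @ A) @ A.T

assert np.allclose(H, H.T)        # symmetric
assert np.allclose(H @ H, H)      # idempotent: projecting twice changes nothing
assert np.allclose(H @ A, A)      # H fixes every vector already in Col A
```

Idempotence and symmetry are exactly the algebraic signature of an orthogonal projection matrix.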
24. Since in this case AᵀA = I, the normal equations give x̂ = Aᵀb.
25. The normal equations are
        [2 2; 2 2][x; y] = [6; 6],
    whose solution is the set of all (x, y) such that x + y = 3. The solutions correspond to the points on the line midway between the lines x + y = 2 and x + y = 4.
26. [M] Using .7 as an approximation for √2/2, a₀ = a₂ = .353535 and a₁ = 1.5. Using .707 as an approximation for √2/2, a₀ = a₂ = .35355339 and a₁ = 1.5.
6.6 SOLUTIONS
Notes:
This section is a valuable reference for any person who works with data that requires statistical
analysis. Many graduate fields require such work. Science students in particular will benefit from
Example 1. The general linear model and the subsequent examples are aimed at students who may take a
multivariate statistics course. That may include more students than one might expect.
1. The design matrix X and the observation vector y are
        X = [1 0; 1 1; 1 2; 1 3],  y = [1; 1; 2; 2],
   and one can compute
        XᵀX = [4 6; 6 14],  Xᵀy = [6; 11],  β̂ = (XᵀX)⁻¹Xᵀy = [.9; .4]
   The least-squares line y = β₀ + β₁x is thus y = .9 + .4x.
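The same fit can be reproduced with NumPy's least-squares solver (a sketch; `np.linalg.lstsq` solves the normal equations for us):

```python
import numpy as np

# Data points (x, y) from Exercise 1
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 1.0, 2.0, 2.0])

# Design matrix with a column of ones for the intercept
X = np.column_stack([np.ones_like(x), x])

# Least-squares solution of X beta = y
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
```

The returned `beta` is (β₀, β₁) = (.9, .4), agreeing with the hand computation above.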
2. The design matrix X and the observation vector y are
        X = [1 1; 1 2; 1 4; 1 5],  y = [0; 1; 2; 3],
   and one can compute
        XᵀX = [4 12; 12 46],  Xᵀy = [6; 25],  β̂ = (XᵀX)⁻¹Xᵀy = [-.6; .7]
   The least-squares line y = β₀ + β₁x is thus y = -.6 + .7x.
3. The design matrix X and the observation vector y are
        X = [1 -1; 1 0; 1 1; 1 2],  y = [0; 1; 2; 4],
   and one can compute
        XᵀX = [4 2; 2 6],  Xᵀy = [7; 10],  β̂ = (XᵀX)⁻¹Xᵀy = [1.1; 1.3]
   The least-squares line y = β₀ + β₁x is thus y = 1.1 + 1.3x.
4. The design matrix X and the observation vector y are
        X = [1 2; 1 3; 1 5; 1 6],  y = [3; 2; 1; 0],
   and one can compute
        XᵀX = [4 16; 16 74],  Xᵀy = [6; 17],  β̂ = (XᵀX)⁻¹Xᵀy = [4.3; -.7]
   The least-squares line y = β₀ + β₁x is thus y = 4.3 - .7x.
5. If two data points have different x-coordinates, then the two columns of the design matrix X cannot
be multiples of each other and hence are linearly independent. By Theorem 14 in Section 6.5, the
normal equations have a unique solution.
6. If the columns of X were linearly dependent, then the same dependence relation would hold for the vectors in ℝ³ formed from the top three entries in each column. That is, the columns of the matrix
        [1 x₁ x₁²; 1 x₂ x₂²; 1 x₃ x₃²]
   would also be linearly dependent, and so this matrix (called a Vandermonde matrix) would be noninvertible. Note that the determinant of this matrix is (x₂ - x₁)(x₃ - x₁)(x₃ - x₂) ≠ 0 since x₁, x₂, and x₃ are distinct. Thus this matrix is invertible, which means that the columns of X are in fact linearly independent. By Theorem 14 in Section 6.5, the normal equations have a unique solution.
7. a. The model that produces the correct least-squares fit is y = Xβ + ε, where
        X = [1 1; 2 4; 3 9; 4 16; 5 25],  y = [1.8; 2.7; 3.4; 3.8; 3.9],  β = [β₁; β₂],  and  ε = [ε₁; ε₂; ε₃; ε₄; ε₅]
   b. [M] One computes that (to two decimal places) β̂ = [1.76; -.20], so the desired least-squares equation is y = 1.76x - .20x².
8. a. The model that produces the correct least-squares fit is y = Xβ + ε, where
        X = [x₁ x₁² x₁³; ... ; xₙ xₙ² xₙ³],  y = [y₁; ... ; yₙ],  β = [β₁; β₂; β₃]
   b. [M] For the given data,
        X = [4 16 64; 6 36 216; 8 64 512; 10 100 1000; 12 144 1728; 14 196 2744; 16 256 4096; 18 324 5832]
        y = [1.58; 2.08; 2.5; 2.8; 3.1; 3.4; 3.8; 4.32]
   so β̂ = (XᵀX)⁻¹Xᵀy = [.5132; -.03348; .001016], and the least-squares curve is y = .5132x - .03348x² + .001016x³.
9. The model that produces the correct least-squares fit is y = Xβ + ε, where
        X = [cos 1  sin 1; cos 2  sin 2; cos 3  sin 3],  y = [7.9; 5.4; -.9],  β = [A; B],  and  ε = [ε₁; ε₂; ε₃]
10. a. The model that produces the correct least-squares fit is y = Xβ + ε, where
        X = [e^(-.02(10)) e^(-.07(10)); e^(-.02(11)) e^(-.07(11)); e^(-.02(12)) e^(-.07(12)); e^(-.02(14)) e^(-.07(14)); e^(-.02(15)) e^(-.07(15))],
        y = [21.34; 20.68; 20.05; 18.87; 18.30],  β = [M_A; M_B],  and  ε = [ε₁; ... ; ε₅]
    b. [M] One computes that (to two decimal places) β̂ = [19.94; 10.10], so the desired least-squares equation is y = 19.94e^(-.02t) + 10.10e^(-.07t).
11. [M] The model that produces the correct least-squares fit is y = Xβ + ε, where
        X = [1 3cos.88; 1 2.3cos1.1; 1 1.65cos1.42; 1 1.25cos1.77; 1 1.01cos2.14],  y = [3; 2.3; 1.65; 1.25; 1.01],  β = [β; e],  and  ε = [ε₁; ... ; ε₅]
    One computes that (to two decimal places) β̂ = [1.45; .811]. Since e = .811 < 1 the orbit is an ellipse. The equation r = β / (1 - e cos ϑ) produces r = 1.33 when ϑ = 4.6.
12. [M] The model that produces the correct least-squares fit is y = Xβ + ε, where
        X = [1 3.78; 1 4.11; 1 4.39; 1 4.73; 1 4.88],  y = [91; 98; 103; 110; 112],  β = [β₀; β₁],  and  ε = [ε₁; ... ; ε₅]
    One computes that (to two decimal places) β̂ = [18.56; 19.24], so the desired least-squares equation is p = 18.56 + 19.24 ln w. When w = 100, p ≈ 107 millimeters of mercury.
13. [M]
    a. The model that produces the correct least-squares fit is y = Xβ + ε, where X is the 13 × 4 matrix whose rows are [1 t t² t³] for t = 0, 1, ..., 12:
        X = [1 0 0 0; 1 1 1 1; 1 2 4 8; 1 3 9 27; 1 4 16 64; 1 5 25 125; 1 6 36 216; 1 7 49 343; 1 8 64 512; 1 9 81 729; 1 10 100 1000; 1 11 121 1331; 1 12 144 1728]
        y = [0; 8.8; 29.9; 62.0; 104.7; 159.1; 222.0; 294.5; 380.4; 471.1; 571.7; 686.8; 809.2],  β = [β₀; β₁; β₂; β₃],  and  ε = [ε₀; ... ; ε₁₂]
    One computes that (to four decimal places)
        β̂ = [-.8558; 4.7025; 5.5554; -.0274],
    so the desired least-squares polynomial is y(t) = -.8558 + 4.7025t + 5.5554t² - .0274t³.
    b. The velocity v(t) is the derivative of the position function y(t), so v(t) = 4.7025 + 11.1108t - .0822t², and v(4.5) = 53.0 ft/sec.
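A quick NumPy cross-check of parts (a) and (b) (a sketch; `np.polyfit` returns the cubic's coefficients highest degree first):

```python
import numpy as np

t = np.arange(13.0)
y = np.array([0, 8.8, 29.9, 62.0, 104.7, 159.1, 222.0, 294.5,
              380.4, 471.1, 571.7, 686.8, 809.2])

# Ordinary least-squares cubic fit y(t) = b0 + b1 t + b2 t^2 + b3 t^3
coeffs = np.polyfit(t, y, 3)          # returned as [b3, b2, b1, b0]
b0, b1, b2, b3 = coeffs[::-1]

# Velocity is the derivative of the fitted position polynomial
v = np.polyder(np.poly1d(coeffs))
v45 = v(4.5)
```

The fitted coefficients agree with β̂ above to four decimal places, and `v45` reproduces the 53.0 ft/sec estimate.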
14. Write the design matrix as X = [1  x]. Since the residual vector ε = y - Xβ̂ is orthogonal to Col X,
        0 = 1·ε = 1ᵀ(y - Xβ̂) = 1ᵀy - (1ᵀX)β̂ = Σy - nβ̂₀ - β̂₁Σx = Σy - nβ̂₀ - nβ̂₁x̄
    This equation may be solved for ȳ to find ȳ = β̂₀ + β̂₁x̄.
15. From equation (1) on page 369,
        XᵀX = [1 ... 1; x₁ ... xₙ][1 x₁; ... ; 1 xₙ] = [n  Σx; Σx  Σx²]
        Xᵀy = [1 ... 1; x₁ ... xₙ][y₁; ... ; yₙ] = [Σy; Σxy]
    The equations (7) in the text follow immediately from the normal equations XᵀXβ = Xᵀy.
16. The determinant of the coefficient matrix of the equations in (7) is nΣx² - (Σx)². Using the 2 × 2 formula for the inverse of the coefficient matrix,
        [β̂₀; β̂₁] = (1 / (nΣx² - (Σx)²)) [Σx²  -Σx; -Σx  n][Σy; Σxy]
    Hence
        β̂₀ = ((Σx²)(Σy) - (Σx)(Σxy)) / (nΣx² - (Σx)²),  β̂₁ = (n(Σxy) - (Σx)(Σy)) / (nΣx² - (Σx)²)
    Note: A simple algebraic calculation shows that β̂₀ = (Σy - β̂₁(Σx)) / n, which provides a simple formula for β̂₀ once β̂₁ is known.
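These closed-form formulas can be checked against a direct matrix solve of the normal equations; the data below are the points from Exercise 1 of this section:

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 1.0, 2.0, 2.0])
n = len(x)

# Closed-form formulas from Exercise 16
den = n * np.sum(x**2) - np.sum(x)**2
b1 = (n * np.sum(x * y) - np.sum(x) * np.sum(y)) / den
b0 = (np.sum(x**2) * np.sum(y) - np.sum(x) * np.sum(x * y)) / den

# Same answer from the normal equations X^T X beta = X^T y
X = np.column_stack([np.ones(n), x])
beta = np.linalg.solve(X.T @ X, X.T @ y)
```

Both routes give (β₀, β₁) = (.9, .4), the line found in Exercise 1.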
17. a. The mean of the data in Example 1 is x̄ = 5.5, so the data in mean-deviation form are (-3.5, 1), (-.5, 2), (1.5, 3), (2.5, 3), and the associated design matrix is
        X = [1 -3.5; 1 -.5; 1 1.5; 1 2.5]
    The columns of X are orthogonal because the entries in the second column sum to 0.
    b. The normal equations are XᵀXβ = Xᵀy, or
        [4 0; 0 21][β₀; β₁] = [9; 7.5]
    One computes that β̂ = [9/4; 5/14], so the desired least-squares line is y = (9/4) + (5/14)x* = (9/4) + (5/14)(x - 5.5).
18. Since
        XᵀX = [1 ... 1; x₁ ... xₙ][1 x₁; ... ; 1 xₙ] = [n  Σx; Σx  Σx²],
    XᵀX is a diagonal matrix when Σx = 0.
19. The residual vector ε = y - Xβ̂ is orthogonal to Col X, while ŷ = Xβ̂ is in Col X. Since ε and ŷ are thus orthogonal, apply the Pythagorean Theorem to these vectors to obtain
        SS(T) = ||y||² = ||Xβ̂ + ε||² = ||Xβ̂||² + ||ε||² = ||ŷ||² + ||ε||² = SS(R) + SS(E)
20. Since β̂ satisfies the normal equations, XᵀXβ̂ = Xᵀy, and
        ||Xβ̂||² = (Xβ̂)ᵀ(Xβ̂) = β̂ᵀXᵀXβ̂ = β̂ᵀXᵀy
    Since ||Xβ̂||² = SS(R) and yᵀy = ||y||² = SS(T), Exercise 19 shows that
        SS(E) = SS(T) - SS(R) = yᵀy - β̂ᵀXᵀy
6.7 SOLUTIONS
Notes: The three types of inner products described here (in Examples 1, 2, and 7) are matched by
examples in Section 6.8. It is possible to spend just one day on selected portions of both sections.
Example 1 matches the weighted least squares in Section 6.8. Examples 2–6 are applied to trend analysis
in Section 6.8. This material is aimed at students who have not had much calculus or who intend to take
more than one course in statistics.
For students who have seen some calculus, Example 7 is needed to develop the Fourier series in
Section 6.8. Example 8 is used to motivate the inner product on C[a, b]. The Cauchy-Schwarz and
triangle inequalities are not used here, but they should be part of the training of every mathematics
student.
1. The inner product is ⟨x, y⟩ = 4x₁y₁ + 5x₂y₂. Let x = (1, 1), y = (5, -1).
   a. Since ||x||² = ⟨x, x⟩ = 9, ||x|| = 3. Since ||y||² = ⟨y, y⟩ = 105, ||y|| = √105. Finally, |⟨x, y⟩|² = 15² = 225.
   b. A vector z is orthogonal to y if and only if ⟨z, y⟩ = 0, that is, 20z₁ - 5z₂ = 0, or z₂ = 4z₁. Thus all multiples of [1; 4] are orthogonal to y.
2. The inner product is ⟨x, y⟩ = 4x₁y₁ + 5x₂y₂. Let x = (3, -2), y = (-2, 1). Compute that ||x||² = ⟨x, x⟩ = 56, ||y||² = ⟨y, y⟩ = 21, ||x||² ||y||² = (56)(21) = 1176, ⟨x, y⟩ = -34, and |⟨x, y⟩|² = 1156. Thus |⟨x, y⟩|² ≤ ||x||² ||y||², as the Cauchy-Schwarz inequality predicts.
3. The inner product is ⟨p, q⟩ = p(-1)q(-1) + p(0)q(0) + p(1)q(1), so ⟨4 + t, 5 - 4t²⟩ = 3(1) + 4(5) + 5(1) = 28.
4. The inner product is ⟨p, q⟩ = p(-1)q(-1) + p(0)q(0) + p(1)q(1), so ⟨3t - t², 3 + 2t²⟩ = (-4)(5) + 0(3) + 2(5) = -10.
5. The inner product is ⟨p, q⟩ = p(-1)q(-1) + p(0)q(0) + p(1)q(1), so ⟨p, p⟩ = ⟨4 + t, 4 + t⟩ = 3² + 4² + 5² = 50 and ||p|| = √⟨p, p⟩ = √50 = 5√2. Likewise ⟨q, q⟩ = ⟨5 - 4t², 5 - 4t²⟩ = 1² + 5² + 1² = 27 and ||q|| = √⟨q, q⟩ = √27 = 3√3.
6. The inner product is ⟨p, q⟩ = p(-1)q(-1) + p(0)q(0) + p(1)q(1), so ⟨p, p⟩ = ⟨3t - t², 3t - t²⟩ = (-4)² + 0² + 2² = 20 and ||p|| = √⟨p, p⟩ = √20 = 2√5. Likewise ⟨q, q⟩ = ⟨3 + 2t², 3 + 2t²⟩ = 5² + 3² + 5² = 59 and ||q|| = √⟨q, q⟩ = √59.
7. The orthogonal projection q̂ of q onto the subspace spanned by p is
        q̂ = (⟨q, p⟩ / ⟨p, p⟩) p = (28/50)(4 + t) = 56/25 + (14/25)t
8. The orthogonal projection q̂ of q onto the subspace spanned by p is
        q̂ = (⟨q, p⟩ / ⟨p, p⟩) p = (-10/20)(3t - t²) = -(3/2)t + (1/2)t²
9. The inner product is ⟨p, q⟩ = p(-3)q(-3) + p(-1)q(-1) + p(1)q(1) + p(3)q(3).
   a. The orthogonal projection p̂₂ of p₂ onto the subspace spanned by p₀ and p₁ is
        p̂₂ = (⟨p₂, p₀⟩ / ⟨p₀, p₀⟩) p₀ + (⟨p₂, p₁⟩ / ⟨p₁, p₁⟩) p₁ = (20/4)(1) + (0/20)t = 5
   b. The vector q = p₂ - p̂₂ = t² - 5 will be orthogonal to both p₀ and p₁ and {p₀, p₁, q} will be an orthogonal basis for Span{p₀, p₁, p₂}. The vector of values for q at (-3, -1, 1, 3) is (4, -4, -4, 4), so scaling by 1/4 yields the new vector q = (1/4)(t² - 5).
10. The best approximation to p = t³ by vectors in W = Span{p₀, p₁, q} will be
        p̂ = proj_W p = (⟨p, p₀⟩/⟨p₀, p₀⟩)p₀ + (⟨p, p₁⟩/⟨p₁, p₁⟩)p₁ + (⟨p, q⟩/⟨q, q⟩)q = (0/4)(1) + (164/20)t + (0/4)((1/4)(t² - 5)) = (41/5)t
11. The orthogonal projection of p = t³ onto W = Span{p₀, p₁, p₂} will be
        p̂ = proj_W p = (⟨p, p₀⟩/⟨p₀, p₀⟩)p₀ + (⟨p, p₁⟩/⟨p₁, p₁⟩)p₁ + (⟨p, p₂⟩/⟨p₂, p₂⟩)p₂ = (0/5)(1) + (34/10)t + (0/14)(t² - 2) = (17/5)t
12. Let W = Span{p₀, p₁, p₂}. The vector p₃ = p - proj_W p = t³ - (17/5)t will make {p₀, p₁, p₂, p₃} an orthogonal basis for the subspace ℙ₃ of ℙ₄. The vector of values for p₃ at (-2, -1, 0, 1, 2) is (-6/5, 12/5, 0, -12/5, 6/5), so scaling by 5/6 yields the new vector p₃ = (5/6)(t³ - (17/5)t) = (5/6)t³ - (17/6)t.
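The mutual orthogonality of p₀, p₁, p₂, p₃ under this evaluation inner product is easy to confirm numerically (a sketch in NumPy):

```python
import numpy as np

# Evaluation points defining the inner product <p, q> = sum of p(t_i) q(t_i)
t = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])

p0 = np.ones_like(t)               # p0(t) = 1
p1 = t                             # p1(t) = t
p2 = t**2 - 2                      # p2(t) = t^2 - 2
p3 = (5/6) * t**3 - (17/6) * t     # p3(t) = (5/6)t^3 - (17/6)t

# Every pair of distinct basis polynomials has inner product 0
polys = [p0, p1, p2, p3]
for i in range(4):
    for j in range(i + 1, 4):
        assert abs(polys[i] @ polys[j]) < 1e-12
```

Here the vector of a polynomial's values at the five points stands in for the polynomial itself, so the inner product is an ordinary dot product.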
13. Suppose that A is invertible and that ⟨u, v⟩ = (Au)·(Av) for u and v in ℝⁿ. Check each axiom in the definition on page 376, using the properties of the dot product.
    i. ⟨u, v⟩ = (Au)·(Av) = (Av)·(Au) = ⟨v, u⟩
    ii. ⟨u + v, w⟩ = (A(u + v))·(Aw) = (Au + Av)·(Aw) = (Au)·(Aw) + (Av)·(Aw) = ⟨u, w⟩ + ⟨v, w⟩
    iii. ⟨cu, v⟩ = (A(cu))·(Av) = (c(Au))·(Av) = c((Au)·(Av)) = c⟨u, v⟩
    iv. ⟨u, u⟩ = (Au)·(Au) = ||Au||² ≥ 0, and this quantity is zero if and only if the vector Au is 0. But Au = 0 if and only if u = 0 because A is invertible.
14. Suppose that T is a one-to-one linear transformation from a vector space V into ℝⁿ and that ⟨u, v⟩ = T(u)·T(v) for u and v in V. Check each axiom in the definition on page 376, using the properties of the dot product and T. The linearity of T is used often in the following.
    i. ⟨u, v⟩ = T(u)·T(v) = T(v)·T(u) = ⟨v, u⟩
    ii. ⟨u + v, w⟩ = T(u + v)·T(w) = (T(u) + T(v))·T(w) = T(u)·T(w) + T(v)·T(w) = ⟨u, w⟩ + ⟨v, w⟩
    iii. ⟨cu, v⟩ = T(cu)·T(v) = (cT(u))·T(v) = c(T(u)·T(v)) = c⟨u, v⟩
    iv. ⟨u, u⟩ = T(u)·T(u) = ||T(u)||² ≥ 0, and this quantity is zero if and only if u = 0 since T is a one-to-one transformation.
6.7 Solutions 393
Copyright © 2012 Pearson Education, Inc. Publishing as Addison-Wesley
15. Using Axioms 1 and 3, ⟨u, cv⟩ = ⟨cv, u⟩ = c⟨v, u⟩ = c⟨u, v⟩.
16. Using Axioms 1, 2, and 3,
        ||u - v||² = ⟨u - v, u - v⟩ = ⟨u, u - v⟩ - ⟨v, u - v⟩
                  = ⟨u, u⟩ - ⟨u, v⟩ - ⟨v, u⟩ + ⟨v, v⟩ = ⟨u, u⟩ - 2⟨u, v⟩ + ⟨v, v⟩
                  = ||u||² - 2⟨u, v⟩ + ||v||²
    Since {u, v} is orthonormal, ||u||² = ||v||² = 1 and ⟨u, v⟩ = 0. So ||u - v||² = 2.
17. Following the method in Exercise 16,
        ||u + v||² = ⟨u + v, u + v⟩ = ⟨u, u + v⟩ + ⟨v, u + v⟩
                  = ⟨u, u⟩ + ⟨u, v⟩ + ⟨v, u⟩ + ⟨v, v⟩ = ⟨u, u⟩ + 2⟨u, v⟩ + ⟨v, v⟩
                  = ||u||² + 2⟨u, v⟩ + ||v||²
    Subtracting these results, one finds that ||u + v||² - ||u - v||² = 4⟨u, v⟩, and dividing by 4 gives the desired identity.
18. In Exercises 16 and 17, it has been shown that ||u - v||² = ||u||² - 2⟨u, v⟩ + ||v||² and ||u + v||² = ||u||² + 2⟨u, v⟩ + ||v||². Adding these two results gives ||u + v||² + ||u - v||² = 2||u||² + 2||v||².
19. Let u = [√a; √b] and v = [√b; √a]. Then ||u||² = a + b, ||v||² = a + b, and ⟨u, v⟩ = 2√(ab). Since a and b are nonnegative, ||u|| = √(a + b) and ||v|| = √(a + b). Plugging these values into the Cauchy-Schwarz inequality gives
        2√(ab) = |⟨u, v⟩| ≤ ||u|| ||v|| = √(a + b) √(a + b) = a + b
    Dividing both sides of this inequality by 2 gives the desired inequality.
20. The Cauchy-Schwarz inequality may be altered by dividing both sides of the inequality by 2 and then squaring both sides of the inequality. The result is
        (⟨u, v⟩ / 2)² ≤ (||u||² ||v||²) / 4
    Now let u = [a; b] and v = [1; 1]. Then ||u||² = a² + b², ||v||² = 2, and ⟨u, v⟩ = a + b. Plugging these values into the inequality above yields the desired inequality.
21. The inner product is ⟨f, g⟩ = ∫₀¹ f(t)g(t) dt. Let f(t) = 1 - 3t², g(t) = t - t³. Then
        ⟨f, g⟩ = ∫₀¹ (1 - 3t²)(t - t³) dt = ∫₀¹ (3t⁵ - 4t³ + t) dt = 0
22. The inner product is ⟨f, g⟩ = ∫₀¹ f(t)g(t) dt. Let f(t) = 5t - 3, g(t) = t³ - t². Then
        ⟨f, g⟩ = ∫₀¹ (5t - 3)(t³ - t²) dt = ∫₀¹ (5t⁴ - 8t³ + 3t²) dt = 0
23. The inner product is ⟨f, g⟩ = ∫₀¹ f(t)g(t) dt, so
        ⟨f, f⟩ = ∫₀¹ (1 - 3t²)² dt = ∫₀¹ (9t⁴ - 6t² + 1) dt = 4/5,
    and ||f|| = √⟨f, f⟩ = 2/√5.
24. The inner product is ⟨f, g⟩ = ∫₀¹ f(t)g(t) dt, so
        ⟨g, g⟩ = ∫₀¹ (t³ - t²)² dt = ∫₀¹ (t⁶ - 2t⁵ + t⁴) dt = 1/105,
    and ||g|| = √⟨g, g⟩ = 1/√105.
25. The inner product is ⟨f, g⟩ = ∫₋₁¹ f(t)g(t) dt. Then 1 and t are orthogonal because ⟨1, t⟩ = ∫₋₁¹ t dt = 0. So 1 and t can be in an orthogonal basis for Span{1, t, t²}. By the Gram-Schmidt process, the third basis element in the orthogonal basis can be
        t² - (⟨t², 1⟩ / ⟨1, 1⟩) 1 - (⟨t², t⟩ / ⟨t, t⟩) t
    Since ⟨t², 1⟩ = ∫₋₁¹ t² dt = 2/3, ⟨1, 1⟩ = ∫₋₁¹ 1 dt = 2, and ⟨t², t⟩ = ∫₋₁¹ t³ dt = 0, the third basis element can be written as t² - 1/3. This element can be scaled by 3, which gives the orthogonal basis as {1, t, 3t² - 1}.
26. The inner product is ⟨f, g⟩ = ∫₋₂² f(t)g(t) dt. Then 1 and t are orthogonal because ⟨1, t⟩ = ∫₋₂² t dt = 0. So 1 and t can be in an orthogonal basis for Span{1, t, t²}. By the Gram-Schmidt process, the third basis element in the orthogonal basis can be
        t² - (⟨t², 1⟩ / ⟨1, 1⟩) 1 - (⟨t², t⟩ / ⟨t, t⟩) t
    Since ⟨t², 1⟩ = ∫₋₂² t² dt = 16/3, ⟨1, 1⟩ = ∫₋₂² 1 dt = 4, and ⟨t², t⟩ = ∫₋₂² t³ dt = 0, the third basis element can be written as t² - 4/3. This element can be scaled by 3, which gives the orthogonal basis as {1, t, 3t² - 4}.
27. [M] The new orthogonal polynomials are multiples of -17t + 5t³ and 72 - 155t² + 35t⁴. These polynomials may be scaled so that their values at -2, -1, 0, 1, and 2 are small integers.
28. [M] The orthogonal basis is f₀(t) = 1, f₁(t) = cos t, f₂(t) = cos² t - (1/2) = (1/2)cos 2t, and f₃(t) = cos³ t - (3/4)cos t = (1/4)cos 3t.
6.8 SOLUTIONS
Notes: The connections between this section and Section 6.7 are described in the notes for that section.
For my junior-senior class, I spend three days on the following topics: Theorems 13 and 15 in Section 6.5,
plus Examples 1, 3, and 5; Example 1 in Section 6.6; Examples 2 and 3 in Section 6.7, with the
motivation for the definite integral; and Fourier series in Section 6.8.
1. The weighting matrix W, design matrix X, parameter vector β, and observation vector y are:
        W = diag(1, 2, 2, 2, 1),  X = [1 -2; 1 -1; 1 0; 1 1; 1 2],  β = [β₀; β₁],  y = [0; 0; 2; 4; 4]
   The design matrix X and the observation vector y are scaled by W:
        WX = [1 -2; 2 -2; 2 0; 2 2; 1 2],  Wy = [0; 0; 4; 8; 4]
   Further compute
        (WX)ᵀWX = [14 0; 0 16],  (WX)ᵀWy = [28; 24]
   and find that
        β̂ = ((WX)ᵀWX)⁻¹(WX)ᵀWy = [1/14 0; 0 1/16][28; 24] = [2; 3/2]
   Thus the weighted least-squares line is y = 2 + (3/2)x.
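The same weighted fit can be sketched in NumPy by scaling both sides by W and solving the resulting ordinary least-squares problem:

```python
import numpy as np

# Data and weights from Exercise 1
X = np.array([[1.0, -2.0], [1.0, -1.0], [1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([0.0, 0.0, 2.0, 4.0, 4.0])
W = np.diag([1.0, 2.0, 2.0, 2.0, 1.0])

# Weighted least squares: ordinary least squares applied to WX and Wy
beta, *_ = np.linalg.lstsq(W @ X, W @ y, rcond=None)
```

The solver returns (β₀, β₁) = (2, 3/2), the weighted least-squares line found above.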
2. Let X be the original design matrix, and let y be the original observation vector. Let W be the weighting matrix for the first method. Then 2W is the weighting matrix for the second method. The weighted least-squares by the first method is equivalent to the ordinary least-squares for an equation whose normal equation is
        (WX)ᵀ(WX)β̂ = (WX)ᵀWy  (1)
   while the second method is equivalent to the ordinary least-squares for an equation whose normal equation is
        (2WX)ᵀ(2WX)β̂ = (2WX)ᵀ(2W)y  (2)
   Since equation (2) can be written as 4(WX)ᵀ(WX)β̂ = 4(WX)ᵀWy, it has the same solutions as equation (1).
3. From Example 2 and the statement of the problem, p₀(t) = 1, p₁(t) = t, p₂(t) = t² - 2, p₃(t) = (5/6)t³ - (17/6)t, and g = (3, 5, 5, 4, 3). The cubic trend function for g is the orthogonal projection p̂ of g onto the subspace spanned by p₀, p₁, p₂, and p₃:
        p̂ = (⟨g, p₀⟩/⟨p₀, p₀⟩)p₀ + (⟨g, p₁⟩/⟨p₁, p₁⟩)p₁ + (⟨g, p₂⟩/⟨p₂, p₂⟩)p₂ + (⟨g, p₃⟩/⟨p₃, p₃⟩)p₃
          = (20/5)(1) + (-1/10)t + (-7/14)(t² - 2) + (2/10)((5/6)t³ - (17/6)t)
          = 5 - (2/3)t - (1/2)t² + (1/6)t³
    This polynomial happens to fit the data exactly.
4. The inner product is ⟨p, q⟩ = p(-5)q(-5) + p(-3)q(-3) + p(-1)q(-1) + p(1)q(1) + p(3)q(3) + p(5)q(5).
   a. Begin with the basis {1, t, t²} for ℙ₂. Since 1 and t are orthogonal, let p₀(t) = 1 and p₁(t) = t. Then the Gram-Schmidt process gives
        p₂(t) = t² - (⟨t², 1⟩/⟨1, 1⟩) 1 - (⟨t², t⟩/⟨t, t⟩) t = t² - 70/6 = t² - 35/3
   The vector of values for p₂ is (40/3, -8/3, -32/3, -32/3, -8/3, 40/3), so scaling by 3/8 yields the new function p₂(t) = (3/8)(t² - 35/3) = (3/8)t² - 35/8.
   b. The data vector is g = (1, 1, 4, 4, 6, 8). The quadratic trend function for g is the orthogonal projection p̂ of g onto the subspace spanned by p₀, p₁, and p₂:
        p̂ = (⟨g, p₀⟩/⟨p₀, p₀⟩)p₀ + (⟨g, p₁⟩/⟨p₁, p₁⟩)p₁ + (⟨g, p₂⟩/⟨p₂, p₂⟩)p₂ = (24/6)(1) + (50/70)t + (6/84)((3/8)t² - 35/8)
          = 59/16 + (5/7)t + (3/112)t²
5. The inner product is ⟨f, g⟩ = ∫₀^{2π} f(t)g(t) dt. Let m ≠ n. Then
        ⟨sin mt, sin nt⟩ = ∫₀^{2π} sin mt sin nt dt = (1/2) ∫₀^{2π} [cos((m - n)t) - cos((m + n)t)] dt = 0
   Thus sin mt and sin nt are orthogonal.
6. The inner product is ⟨f, g⟩ = ∫₀^{2π} f(t)g(t) dt. Let m and n be positive integers. Then
        ⟨sin mt, cos nt⟩ = ∫₀^{2π} sin mt cos nt dt = (1/2) ∫₀^{2π} [sin((m + n)t) + sin((m - n)t)] dt = 0
   Thus sin mt and cos nt are orthogonal.
7. The inner product is ⟨f, g⟩ = ∫₀^{2π} f(t)g(t) dt. Let k be a positive integer. Then
        ||cos kt||² = ⟨cos kt, cos kt⟩ = ∫₀^{2π} cos² kt dt = (1/2) ∫₀^{2π} (1 + cos 2kt) dt = π
   and
        ||sin kt||² = ⟨sin kt, sin kt⟩ = ∫₀^{2π} sin² kt dt = (1/2) ∫₀^{2π} (1 - cos 2kt) dt = π
8. Let f(t) = t - 1. The Fourier coefficients for f are:
        a₀ = (1/2π) ∫₀^{2π} f(t) dt = (1/2π) ∫₀^{2π} (t - 1) dt = -1 + π
   and for k > 0,
        aₖ = (1/π) ∫₀^{2π} f(t) cos kt dt = (1/π) ∫₀^{2π} (t - 1) cos kt dt = 0
        bₖ = (1/π) ∫₀^{2π} f(t) sin kt dt = (1/π) ∫₀^{2π} (t - 1) sin kt dt = -2/k
   The third-order Fourier approximation to f is thus
        a₀ + b₁ sin t + b₂ sin 2t + b₃ sin 3t = -1 + π - 2 sin t - sin 2t - (2/3) sin 3t
9. Let f(t) = 2π - t. The Fourier coefficients for f are:
        a₀ = (1/2π) ∫₀^{2π} f(t) dt = (1/2π) ∫₀^{2π} (2π - t) dt = π
   and for k > 0,
        aₖ = (1/π) ∫₀^{2π} f(t) cos kt dt = (1/π) ∫₀^{2π} (2π - t) cos kt dt = 0
        bₖ = (1/π) ∫₀^{2π} f(t) sin kt dt = (1/π) ∫₀^{2π} (2π - t) sin kt dt = 2/k
   The third-order Fourier approximation to f is thus
        a₀ + b₁ sin t + b₂ sin 2t + b₃ sin 3t = π + 2 sin t + sin 2t + (2/3) sin 3t
10. Let
        f(t) = 1 for 0 ≤ t < π,  f(t) = -1 for π ≤ t < 2π.
    The Fourier coefficients for f are:
        a₀ = (1/2π) ∫₀^{2π} f(t) dt = (1/2π) ∫₀^{π} 1 dt - (1/2π) ∫_{π}^{2π} 1 dt = 0
    and for k > 0,
        aₖ = (1/π) ∫₀^{2π} f(t) cos kt dt = (1/π) ∫₀^{π} cos kt dt - (1/π) ∫_{π}^{2π} cos kt dt = 0
        bₖ = (1/π) ∫₀^{2π} f(t) sin kt dt = (1/π) ∫₀^{π} sin kt dt - (1/π) ∫_{π}^{2π} sin kt dt = 4/(kπ) for k odd, 0 for k even
    The third-order Fourier approximation to f is thus
        b₁ sin t + b₃ sin 3t = (4/π) sin t + (4/(3π)) sin 3t
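The square-wave coefficients above can be cross-checked by numerical integration (a midpoint-rule sketch in NumPy):

```python
import numpy as np

# Square wave on [0, 2*pi): +1 on [0, pi), -1 on [pi, 2*pi)
N = 200_000
dt = 2 * np.pi / N
t = (np.arange(N) + 0.5) * dt          # midpoints avoid the jump points
f = np.where(t < np.pi, 1.0, -1.0)

def b(k):
    """Fourier sine coefficient b_k = (1/pi) * integral of f(t) sin(kt) dt."""
    return np.sum(f * np.sin(k * t)) * dt / np.pi
```

Evaluating `b(1)`, `b(2)`, `b(3)` reproduces 4/π, 0, and 4/(3π) to several decimal places, matching the closed-form result.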
11. The trigonometric identity cos 2t = 1 - 2 sin² t shows that
        sin² t = 1/2 - (1/2) cos 2t
    The expression on the right is in the subspace spanned by the trigonometric polynomials of order 3 or less, so this expression is the third-order Fourier approximation to sin² t.
12. The trigonometric identity cos 3t = 4 cos³ t - 3 cos t shows that
        cos³ t = (3/4) cos t + (1/4) cos 3t
    The expression on the right is in the subspace spanned by the trigonometric polynomials of order 3 or less, so this expression is the third-order Fourier approximation to cos³ t.
13. Let f and g be in C[0, 2π] and let m be a nonnegative integer. Then the linearity of the inner product shows that
        ⟨(f + g), cos mt⟩ = ⟨f, cos mt⟩ + ⟨g, cos mt⟩,  ⟨(f + g), sin mt⟩ = ⟨f, sin mt⟩ + ⟨g, sin mt⟩
    Dividing these identities respectively by ⟨cos mt, cos mt⟩ and ⟨sin mt, sin mt⟩ shows that the Fourier coefficients aₘ and bₘ for f + g are the sums of the corresponding Fourier coefficients of f and of g.
14. Note that g and h are both in the subspace H spanned by the trigonometric polynomials of order 2 or
less. Since h is the second-order Fourier approximation to f, it is closer to f than any other function in
the subspace H.
15. [M] The weighting matrix W is the 13 × 13 diagonal matrix with diagonal entries 1, 1, 1, .9, .9, .8, .7, .6, .5, .4, .3, .2, .1. The design matrix X, parameter vector β, and observation vector y are as in Exercise 13 of Section 6.6: X is the 13 × 4 matrix with rows [1 t t² t³] for t = 0, 1, ..., 12, β = [β₀; β₁; β₂; β₃], and
        y = [0.0; 8.8; 29.9; 62.0; 104.7; 159.1; 222.0; 294.5; 380.4; 471.1; 571.7; 686.8; 809.2]
    The design matrix X and the observation vector y are scaled by W:
        WX = [1.0 0.0 0.0 0.0; 1.0 1.0 1.0 1.0; 1.0 2.0 4.0 8.0; .9 2.7 8.1 24.3; .9 3.6 14.4 57.6; .8 4.0 20.0 100.0; .7 4.2 25.2 151.2; .6 4.2 29.4 205.8; .5 4.0 32.0 256.0; .4 3.6 32.4 291.6; .3 3.0 30.0 300.0; .2 2.2 24.2 266.2; .1 1.2 14.4 172.8]
        Wy = [0.00; 8.80; 29.90; 55.80; 94.23; 127.28; 155.40; 176.70; 190.20; 188.44; 171.51; 137.36; 80.92]
    Further compute
        (WX)ᵀ(WX) = [6.66 22.23 120.77 797.19; 22.23 120.77 797.19 5956.13; 120.77 797.19 5956.13 48490.23; 797.19 5956.13 48490.23 420477.17]
        (WX)ᵀWy = [747.844; 4815.438; 35420.468; 285262.440]
    and find that
        β̂ = ((WX)ᵀ(WX))⁻¹(WX)ᵀWy = [-0.2685; 3.6095; 5.8576; -0.0477]
    Thus the weighted least-squares cubic is y = g(t) = -.2685 + 3.6095t + 5.8576t² - .0477t³. The velocity at t = 4.5 seconds is g′(4.5) = 53.4 ft/sec. This is about 0.7% faster than the estimate obtained in Exercise 13 of Section 6.6.
16. [M] Let
        f(t) = 1 for 0 ≤ t < π,  f(t) = -1 for π ≤ t < 2π.
    The Fourier coefficients for f have already been found to be aₖ = 0 for all k ≥ 0 and bₖ = 4/(kπ) for odd k, 0 for even k. Thus
        f₄(t) = (4/π) sin t + (4/(3π)) sin 3t  and  f₅(t) = (4/π) sin t + (4/(3π)) sin 3t + (4/(5π)) sin 5t
    [Graphs of f₄ over [0, 2π], of f₅ over [0, 2π], and of f₅ over [-2π, 2π] appear here; figures omitted.]
Chapter 6 SUPPLEMENTARY EXERCISES
1. a. False. The length of the zero vector is zero.
   b. True. By the displayed equation before Example 2 in Section 6.1, with c = -1, ||-x|| = ||(-1)x|| = |-1| ||x|| = ||x||.
   c. True. This is the definition of distance.
   d. False. This equation would be true if r||v|| were replaced by |r| ||v||.
   e. False. Orthogonal nonzero vectors are linearly independent.
   f. True. If x·u = 0 and x·v = 0, then x·(u - v) = x·u - x·v = 0.
   g. True. This is the "only if" part of the Pythagorean Theorem in Section 6.1.
   h. True. This is the "only if" part of the Pythagorean Theorem in Section 6.1 where v is replaced by -v, because ||-v||² is the same as ||v||².
   i. False. The orthogonal projection of y onto u is a scalar multiple of u, not y (except when y itself is already a multiple of u).
   j. True. The orthogonal projection of any vector y onto W is always a vector in W.
   k. True. This is a special case of the statement in the box following Example 6 in Section 6.1 (and proved in Exercise 30 of Section 6.1).
   l. False. The zero vector is in both W and W⊥.
   m. True. See Exercise 32 in Section 6.2. If vᵢ·vⱼ = 0, then (cᵢvᵢ)·(cⱼvⱼ) = cᵢcⱼ(vᵢ·vⱼ) = cᵢcⱼ(0) = 0.
   n. False. This statement is true only for a square matrix. See Theorem 10 in Section 6.3.
   o. False. An orthogonal matrix is square and has orthonormal columns.
   p. True. See Exercises 27 and 28 in Section 6.2. If U has orthonormal columns, then UᵀU = I. If U is also square, then the Invertible Matrix Theorem shows that U is invertible and U⁻¹ = Uᵀ. In this case, UUᵀ = I, which shows that the columns of Uᵀ are orthonormal; that is, the rows of U are orthonormal.
   q. True. By the Orthogonal Decomposition Theorem, the vectors proj_W v and v - proj_W v are orthogonal, so the stated equality follows from the Pythagorean Theorem.
   r. False. A least-squares solution is a vector x̂ (not Ax̂) such that Ax̂ is the closest point to b in Col A.
   s. False. The equation x̂ = (AᵀA)⁻¹Aᵀb describes the solution of the normal equations, not the matrix form of the normal equations. Furthermore, this equation makes sense only when AᵀA is invertible.
2. If {v₁, v₂} is an orthonormal set and x = c₁v₁ + c₂v₂, then the vectors c₁v₁ and c₂v₂ are orthogonal (Exercise 32 in Section 6.2). By the Pythagorean Theorem and properties of the norm,
        ||x||² = ||c₁v₁ + c₂v₂||² = ||c₁v₁||² + ||c₂v₂||² = (|c₁| ||v₁||)² + (|c₂| ||v₂||)² = |c₁|² + |c₂|²
   So the stated equality holds for p = 2. Now suppose the equality holds for p = k, with k ≥ 2. Let {v₁, ..., v_{k+1}} be an orthonormal set, and consider
        x = c₁v₁ + ... + c_k v_k + c_{k+1} v_{k+1} = u_k + c_{k+1} v_{k+1}
   where u_k = c₁v₁ + ... + c_k v_k. Observe that u_k and c_{k+1} v_{k+1} are orthogonal because vⱼ·v_{k+1} = 0 for j = 1, ..., k. By the Pythagorean Theorem and the assumption that the stated equality holds for k, and because ||c_{k+1} v_{k+1}||² = |c_{k+1}|² ||v_{k+1}||² = |c_{k+1}|²,
        ||x||² = ||u_k + c_{k+1} v_{k+1}||² = ||u_k||² + ||c_{k+1} v_{k+1}||² = |c₁|² + ... + |c_k|² + |c_{k+1}|²
   Thus the truth of the equality for p = k implies its truth for p = k + 1. By the principle of induction, the equality is true for all integers p ≥ 2.
3. Given x and an orthonormal set
1
{, , }
p
vv
in
n
, let
ˆ
x
be the orthogonal projection of x onto the
subspace spanned by
1
,,
p
vv
. By Theorem 10 in Section 6.3,
11
ˆ() ( ).
pp
=+…+ xxvv xvv
By
Exercise 2,
22 2
1
ˆ
|| || | | | | .
p
=+…+ xxv xv
Bessel’s inequality follows from the fact that
22
ˆ
|| || || || ,xx
which is noted before the proof of the Cauchy-Schwarz inequality in Section 6.7.
4. By parts (a) and (c) of Theorem 7 in Section 6.2,
1
{,, }
k
UU
vv
is an orthonormal set in
n
. Since
there are n vectors in this linearly independent set, the set is a basis for
n
.
402 CHAPTER 6 Orthogonality and Least Squares
Copyright © 2012 Pearson Education, Inc. Publishing as Addison-Wesley.
5. Suppose that (Ux) · (Uy) = x · y for all x, y in ℝⁿ, and let e1, …, e_n be the standard basis for ℝⁿ. For j = 1, …, n, Ue_j is the jth column of U. Since || Ue_j ||² = (Ue_j) · (Ue_j) = e_j · e_j = 1, the columns of U are unit vectors; since (Ue_j) · (Ue_k) = e_j · e_k = 0 for j ≠ k, the columns are pairwise orthogonal.
6. If Ux = λx for some x ≠ 0, then by Theorem 7(a) in Section 6.2 and by a property of the norm, || x || = || Ux || = || λx || = | λ | || x ||, which shows that | λ | = 1, because x ≠ 0.
7. Let u be a unit vector, and let Q = I − 2uu^T. Since (uu^T)^T = u^(TT) u^T = uu^T,
Q^T = (I − 2uu^T)^T = I − 2(uu^T)^T = I − 2uu^T = Q
Then
Q^T Q = Q² = (I − 2uu^T)² = I − 2uu^T − 2uu^T + 4(uu^T)(uu^T)
Since u is a unit vector, u^T u = u · u = 1, so (uu^T)(uu^T) = u(u^T u)u^T = uu^T, and
Q^T Q = I − 2uu^T − 2uu^T + 4uu^T = I
Thus Q is an orthogonal matrix.
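The reflector identity in Exercise 7 is easy to check numerically. The sketch below (unit vector u chosen here for illustration; it is not from the text) verifies that Q = I − 2uu^T is both symmetric and orthogonal:

```python
import numpy as np

# Q = I - 2 u u^T for a unit vector u (a Householder reflector).
u = np.array([[3.0], [4.0]]) / 5.0           # a unit vector in R^2
Q = np.eye(2) - 2 * u @ u.T
assert np.allclose(Q, Q.T)                   # Q^T = Q
assert np.allclose(Q.T @ Q, np.eye(2))       # Q^T Q = I, so Q is orthogonal
```

Any other unit vector u gives the same conclusion, since the algebra above never used the particular entries of u.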
8. a. Suppose that x · y = 0. By the Pythagorean Theorem, || x ||² + || y ||² = || x + y ||². Since T preserves lengths and is linear,
|| T(x) ||² + || T(y) ||² = || T(x + y) ||² = || T(x) + T(y) ||²
This equation shows that T(x) and T(y) are orthogonal, because of the Pythagorean Theorem. Thus T preserves orthogonality.
b. The standard matrix of T is [T(e1) ⋯ T(e_n)], where e1, …, e_n are the columns of the identity matrix. Then {T(e1), …, T(e_n)} is an orthonormal set because T preserves both orthogonality and lengths (and because the columns of the identity matrix form an orthonormal set). Finally, a square matrix with orthonormal columns is an orthogonal matrix, as was observed in Section 6.2.
9. Let W = Span{u, v}. Given z in ℝⁿ, let ẑ = proj_W z. Then ẑ is in Col A, where A = [u v]. Thus there is a vector, say x̂, in ℝ², with Ax̂ = ẑ. So x̂ is a least-squares solution of Ax = z. The normal equations may be solved to find x̂, and then ẑ may be found by computing Ax̂.
10. Use Theorem 14 in Section 6.5. If c ≠ 0, the least-squares solution of Ax = cb is given by (A^T A)^(-1) A^T (cb), which equals c(A^T A)^(-1) A^T b, by linearity of matrix multiplication. This solution is c times the least-squares solution of Ax = b.
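The linearity fact in Exercise 10 can be confirmed numerically. In this sketch the matrix A, right-hand side b, and scalar c are chosen here for illustration (they are not from the text):

```python
import numpy as np

# The least-squares solution of Ax = c*b is c times that of Ax = b.
A = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 2.0, 2.0])
c = 3.0
x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)        # least-squares solution of Ax = b
x_hat_c, *_ = np.linalg.lstsq(A, c * b, rcond=None)  # least-squares solution of Ax = c*b
assert np.allclose(x_hat_c, c * x_hat)
```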
11. Let x = [x; y; z], b = [a; b; c], v = [1; −2; 5], and
A = [v^T; v^T; v^T] = [1 −2 5; 1 −2 5; 1 −2 5]
Then the given set of equations is Ax = b, and the set of all least-squares solutions coincides with the set of solutions of the normal equations A^T A x = A^T b. The column-row expansions of A^T A and A^T b give
A^T A = vv^T + vv^T + vv^T = 3vv^T,  A^T b = av + bv + cv = (a + b + c)v
Thus A^T A x = 3(vv^T)x = 3v(v^T x) = 3(v^T x)v since v^T x is a scalar, and the normal equations have become 3(v^T x)v = (a + b + c)v, so 3(v^T x) = a + b + c, or v^T x = (a + b + c)/3. Computing v^T x gives the equation x − 2y + 5z = (a + b + c)/3, which must be satisfied by all least-squares solutions to Ax = b.
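The conclusion of Exercise 11 can be checked numerically: for any sample values of a, b, c (chosen here for illustration), a least-squares solution of Ax = b satisfies x − 2y + 5z = (a + b + c)/3.

```python
import numpy as np

# Three identical equations, each with coefficient row v^T = [1, -2, 5].
v = np.array([1.0, -2.0, 5.0])
A = np.tile(v, (3, 1))                        # rows of A are all v^T
a_, b_, c_ = 2.0, 7.0, -3.0                   # sample right-hand side values
rhs = np.array([a_, b_, c_])
x_hat, *_ = np.linalg.lstsq(A, rhs, rcond=None)
# v^T x_hat should equal (a + b + c)/3.
assert np.isclose(v @ x_hat, (a_ + b_ + c_) / 3)
```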
12. The equation (1) in the exercise has been written as Vλ = b, where V is a single nonzero column vector v, and b = Av. The least-squares solution λ̂ of Vλ = b is the exact solution of the normal equations V^T Vλ = V^T b. In the original notation, this equation is v^T vλ = v^T Av. Since v^T v is nonzero, the least-squares solution λ̂ is v^T Av/(v^T v). This expression is the Rayleigh quotient discussed in the Exercises for Section 5.8.
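As a numerical sketch of Exercise 12 (the matrix A and vector v below are chosen here, not taken from the text), the least-squares solution of vλ ≈ Av does equal the Rayleigh quotient:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])
v = np.array([1.0, 1.0])
# Least-squares solution of (v) * lambda = A v, with v as a one-column matrix.
lam_hat, *_ = np.linalg.lstsq(v.reshape(-1, 1), A @ v, rcond=None)
rayleigh = (v @ A @ v) / (v @ v)
assert np.isclose(lam_hat[0], rayleigh)
```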
13. a. The row-column calculation of Au shows that each row of A is orthogonal to every u in Nul A. So each row of A is in (Nul A)⊥. Since (Nul A)⊥ is a subspace, it must contain all linear combinations of the rows of A; hence (Nul A)⊥ contains Row A.
b. If rank A = r, then dim Nul A = n − r by the Rank Theorem. By Exercise 24(c) in Section 6.3,
dim Nul A + dim(Nul A)⊥ = n
so dim(Nul A)⊥ must be r. But Row A is an r-dimensional subspace of (Nul A)⊥ by the Rank Theorem and part (a). Therefore, Row A = (Nul A)⊥.
c. Replace A by A^T in part (b) and conclude that Row A^T = (Nul A^T)⊥. Since Row A^T = Col A, Col A = (Nul A^T)⊥.
14. The equation Ax = b has a solution if and only if b is in Col A. By Exercise 13(c), Ax = b has a solution if and only if b is orthogonal to Nul A^T. This happens if and only if b is orthogonal to all solutions of A^T x = 0.
15. If A = URU^T with U orthogonal, then A is similar to R (because U is invertible and U^(-1) = U^T), so A has the same eigenvalues as R by Theorem 4 in Section 5.2. Since the eigenvalues of R are its n real diagonal entries, A has n real eigenvalues.
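The orthogonality in Exercises 13 and 14 can be seen numerically. The sketch below (matrix chosen here for illustration) builds an orthonormal basis for Nul A^T from the SVD and checks that it is orthogonal to Col A:

```python
import numpy as np

A = np.array([[1.0, 2.0], [2.0, 4.0], [0.0, 1.0]])
U, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))
N = U[:, rank:]                  # orthonormal basis for Nul(A^T) = (Col A)-perp
assert np.allclose(A.T @ N, 0)   # columns of N solve A^T x = 0
assert np.allclose(N.T @ A, 0)   # every column of A is orthogonal to Nul(A^T)
```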
16. a. If U = [u1 u2 ⋯ u_n], then AU = [Au1 Au2 ⋯ Au_n] = [λ1u1 Au2 ⋯ Au_n]. Since u1 is a unit vector and u2, …, u_n are orthogonal to u1, the first column of U^T AU is U^T (λ1u1) = λ1 U^T u1 = λ1 e1.
b. From (a), U^T AU has the block upper triangular form
U^T AU = [λ1 *; 0 A1]
where the (1, 1)-block is the 1 × 1 matrix [λ1] and A1 is (n − 1) × (n − 1). Then from Supplementary Exercise 12 in Chapter 5,
det(U^T AU − λI_n) = det((λ1 − λ)I_1) · det(A1 − λI_(n−1)) = (λ1 − λ) · det(A1 − λI_(n−1))
This shows that the eigenvalues of U^T AU, namely λ1, …, λ_n, consist of λ1 and the eigenvalues of A1. So the eigenvalues of A1 are λ2, …, λ_n.
404 CHAPTER 6 Orthogonality and Least Squares
Copyright © 2012 Pearson Education, Inc. Publishing as Addison-Wesley.
17. [M] Compute that ||Δx||/||x|| = .4618 and cond(A) × (||Δb||/||b||) = 3363 × (1.548 × 10⁻⁴) = .5206. In this case, ||Δx||/||x|| is almost the same as cond(A) × ||Δb||/||b||.
18. [M] Compute that ||Δx||/||x|| = .00212 and cond(A) × (||Δb||/||b||) = 3363 × (.00212) ≈ 7.130. In this case, ||Δx||/||x|| is almost the same as ||Δb||/||b||, even though the large condition number suggests that ||Δx||/||x|| could be much larger.
19. [M] Compute that ||Δx||/||x|| = 7.178 × 10⁻⁸ and cond(A) × (||Δb||/||b||) = 23683 × (2.832 × 10⁻⁴) ≈ 6.707. Observe that the relative change in x is much smaller than the relative change in b. In fact the theoretical bound on the relative change in x is 6.707 (to four significant figures). This exercise shows that even when a condition number is large, the relative error in the solution need not be as large as you suspect.
20. [M] Compute that ||Δx||/||x|| = .2597 and cond(A) × (||Δb||/||b||) = 23683 × (1.097 × 10⁻⁵) = .2598. This calculation shows that the relative change in x, for this particular b and Δb, should not exceed .2598. In this case, the theoretical maximum change is almost achieved.
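The bound underlying Exercises 17–20 is ||Δx||/||x|| ≤ cond(A) · ||Δb||/||b||. The sketch below illustrates it with data chosen here (not the matrices of the exercises):

```python
import numpy as np

A = np.array([[4.5, 3.1], [1.6, 1.1]])
b = np.array([19.2, 6.8])
db = np.array([1e-4, -1e-4])                 # a small perturbation of b
x = np.linalg.solve(A, b)
dx = np.linalg.solve(A, b + db) - x
rel_change_x = np.linalg.norm(dx) / np.linalg.norm(x)
bound = np.linalg.cond(A) * np.linalg.norm(db) / np.linalg.norm(b)
assert rel_change_x <= bound + 1e-12         # the theoretical bound holds
```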
7.1 SOLUTIONS
Notes:
Students can profit by reviewing Section 5.3 (focusing on the Diagonalization Theorem) before working on this section. Theorems 1 and 2 and the calculations in Examples 2 and 3 are important for the sections that follow. Note that symmetric matrix means real symmetric matrix, because all matrices in the text have real entries, as mentioned at the beginning of this chapter. The exercises in this section have been constructed so that mastery of the Gram-Schmidt process is not needed.
Theorem 2 is easily proved for the 2 × 2 case: If A = [a b; b d], then
λ = (1/2)[a + d ± √((a − d)² + 4b²)]
If b = 0 there is nothing to prove. Otherwise, there are two distinct eigenvalues, so A must be diagonalizable. In each case, an eigenvector for λ is [λ − d; b].
1. Since A^T = A, the matrix is symmetric.
2. Since A^T ≠ A, the matrix is not symmetric.
3. Since A^T ≠ A, the matrix is not symmetric.
4. Since A^T = A, the matrix is symmetric.
5. Since A^T ≠ A, the matrix is not symmetric.
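The test used in Exercises 1–6 is simply whether a matrix equals its transpose. A short sketch (example matrices chosen here for illustration):

```python
import numpy as np

# A matrix is symmetric exactly when A == A^T; a non-square matrix never is.
A = np.array([[3.0, 5.0], [5.0, -7.0]])   # symmetric
B = np.array([[2.0, 2.0], [4.0, 4.0]])    # not symmetric
assert np.array_equal(A, A.T)
assert not np.array_equal(B, B.T)
```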
406 CHAPTER 7 Symmetric Matrices and Quadratic Forms
Copyright © 2012 Pearson Education, Inc. Publishing as Addison-Wesley.
6. Since A is not a square matrix, A^T ≠ A and the matrix is not symmetric.
7. Let P = [.6 .8; .8 −.6], and compute that
P^T P = [.6 .8; .8 −.6][.6 .8; .8 −.6] = [1 0; 0 1] = I₂
Since P is a square matrix, P is orthogonal and P^(-1) = P^T = [.6 .8; .8 −.6].
8. Let P = [1/√2 −1/√2; 1/√2 1/√2], and compute that
P^T P = [1/√2 1/√2; −1/√2 1/√2][1/√2 −1/√2; 1/√2 1/√2] = [1 0; 0 1] = I₂
Since P is a square matrix, P is orthogonal and P^(-1) = P^T = [1/√2 1/√2; −1/√2 1/√2].
9. Let P = [−5 2; 2 5], and compute that
P^T P = [−5 2; 2 5][−5 2; 2 5] = [29 0; 0 29] ≠ I₂
Thus P is not orthogonal.
10. Let P = [−1 2 2; 2 −1 2; 2 2 −1], and compute that
P^T P = [−1 2 2; 2 −1 2; 2 2 −1][−1 2 2; 2 −1 2; 2 2 −1] = [9 0 0; 0 9 0; 0 0 9] ≠ I₃
Thus P is not orthogonal.
11. Let P = [2/3 2/3 1/3; 0 1/√5 −2/√5; √5/3 −4/√45 −2/√45], and compute that
P^T P = [2/3 0 √5/3; 2/3 1/√5 −4/√45; 1/3 −2/√5 −2/√45][2/3 2/3 1/3; 0 1/√5 −2/√5; √5/3 −4/√45 −2/√45] = [1 0 0; 0 1 0; 0 0 1] = I₃
Since P is a square matrix, P is orthogonal and
P^(-1) = P^T = [2/3 0 √5/3; 2/3 1/√5 −4/√45; 1/3 −2/√5 −2/√45]
12. Let P = [.5 .5 .5 .5; .5 −.5 .5 −.5; .5 .5 −.5 −.5; .5 −.5 −.5 .5], and compute that P^T P = I₄. Since P is a square matrix, P is orthogonal and P^(-1) = P^T.
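The orthogonality test of Exercises 7–12 is one matrix product. The sketch below uses the matrices of Exercises 7 and 9 as reconstructed above (entry signs as shown there):

```python
import numpy as np

P = np.array([[0.6, 0.8], [0.8, -0.6]])     # Exercise 7: orthogonal
Q = np.array([[-5.0, 2.0], [2.0, 5.0]])     # Exercise 9: not orthogonal
assert np.allclose(P.T @ P, np.eye(2))      # P^T P = I
assert np.allclose(np.linalg.inv(P), P.T)   # so P^{-1} = P^T
assert np.allclose(Q.T @ Q, 29 * np.eye(2)) # Q^T Q = 29 I, not the identity
```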
13. Let A = [3 1; 1 3]. Then the characteristic polynomial of A is (3 − λ)² − 1 = λ² − 6λ + 8 = (λ − 4)(λ − 2), so the eigenvalues of A are 4 and 2. For λ = 4, one computes that a basis for the eigenspace is [1; 1], which can be normalized to get u1 = [1/√2; 1/√2]. For λ = 2, one computes that a basis for the eigenspace is [−1; 1], which can be normalized to get u2 = [−1/√2; 1/√2]. Let
P = [u1 u2] = [1/√2 −1/√2; 1/√2 1/√2]  and  D = [4 0; 0 2]
Then P orthogonally diagonalizes A, and A = PDP^(-1).
14. Let A = [1 5; 5 1]. Then the characteristic polynomial of A is (1 − λ)² − 25 = λ² − 2λ − 24 = (λ − 6)(λ + 4), so the eigenvalues of A are 6 and −4. For λ = 6, one computes that a basis for the eigenspace is [1; 1], which can be normalized to get u1 = [1/√2; 1/√2]. For λ = −4, one computes that a basis for the eigenspace is [−1; 1], which can be normalized to get u2 = [−1/√2; 1/√2]. Let
P = [u1 u2] = [1/√2 −1/√2; 1/√2 1/√2]  and  D = [6 0; 0 −4]
Then P orthogonally diagonalizes A, and A = PDP^(-1).
15. Let A = [16 4; 4 1]. Then the characteristic polynomial of A is (16 − λ)(1 − λ) − 16 = λ² − 17λ = λ(λ − 17), so the eigenvalues of A are 17 and 0. For λ = 17, one computes that a basis for the eigenspace is [4; 1], which can be normalized to get u1 = [4/√17; 1/√17]. For λ = 0, one computes that a basis for the eigenspace is [−1; 4], which can be normalized to get u2 = [−1/√17; 4/√17]. Let
P = [u1 u2] = [4/√17 −1/√17; 1/√17 4/√17]  and  D = [17 0; 0 0]
Then P orthogonally diagonalizes A, and A = PDP^(-1).
16. Let A = [−7 24; 24 7]. Then the characteristic polynomial of A is (−7 − λ)(7 − λ) − 576 = λ² − 625 = (λ − 25)(λ + 25), so the eigenvalues of A are 25 and −25. For λ = 25, one computes that a basis for the eigenspace is [3; 4], which can be normalized to get u1 = [3/5; 4/5]. For λ = −25, one computes that a basis for the eigenspace is [−4; 3], which can be normalized to get u2 = [−4/5; 3/5]. Let
P = [u1 u2] = [3/5 −4/5; 4/5 3/5]  and  D = [25 0; 0 −25]
Then P orthogonally diagonalizes A, and A = PDP^(-1).
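In software, the whole workflow of Exercises 13–16 is available through `numpy.linalg.eigh`, which returns orthonormal eigenvectors for a symmetric matrix. A sketch using the matrix of Exercise 13:

```python
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 3.0]])
w, P = np.linalg.eigh(A)              # eigenvalues ascending, columns of P orthonormal
D = np.diag(w)
assert np.allclose(np.sort(w), [2.0, 4.0])   # eigenvalues 2 and 4
assert np.allclose(P.T @ P, np.eye(2))       # P is orthogonal
assert np.allclose(A, P @ D @ P.T)           # A = P D P^{-1} = P D P^T
```

Note that `eigh` lists the eigenvalues in ascending order, so its P and D may differ from the hand computation by a permutation and by signs of the columns.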
17. Let A = [1 1 3; 1 3 1; 3 1 1]. The eigenvalues of A are 5, 2, and −2. For λ = 5, one computes that a basis for the eigenspace is [1; 1; 1], which can be normalized to get u1 = [1/√3; 1/√3; 1/√3]. For λ = 2, one computes that a basis for the eigenspace is [1; −2; 1], which can be normalized to get u2 = [1/√6; −2/√6; 1/√6]. For λ = −2, one computes that a basis for the eigenspace is [−1; 0; 1], which can be normalized to get u3 = [−1/√2; 0; 1/√2]. Let
P = [u1 u2 u3] = [1/√3 1/√6 −1/√2; 1/√3 −2/√6 0; 1/√3 1/√6 1/√2]  and  D = [5 0 0; 0 2 0; 0 0 −2]
Then P orthogonally diagonalizes A, and A = PDP^(-1).
18. Let A = [−2 36 0; 36 −23 0; 0 0 3]. The eigenvalues of A are 25, 3, and −50. For λ = 25, one computes that a basis for the eigenspace is [4; 3; 0], which can be normalized to get u1 = [4/5; 3/5; 0]. For λ = 3, one computes that a basis for the eigenspace is [0; 0; 1], which is of length 1, so u2 = [0; 0; 1]. For λ = −50, one computes that a basis for the eigenspace is [−3; 4; 0], which can be normalized to get u3 = [−3/5; 4/5; 0]. Let
P = [u1 u2 u3] = [4/5 0 −3/5; 3/5 0 4/5; 0 1 0]  and  D = [25 0 0; 0 3 0; 0 0 −50]
Then P orthogonally diagonalizes A, and A = PDP^(-1).
19. Let A = [3 −2 4; −2 6 2; 4 2 3]. The eigenvalues of A are 7 and −2. For λ = 7, one computes that a basis for the eigenspace is {[1; −2; 0], [1; 0; 1]}. This basis may be converted via orthogonal projection to an orthogonal basis for the eigenspace: {[1; −2; 0], [4; 2; 5]}. These vectors can be normalized to get u1 = [1/√5; −2/√5; 0] and u2 = [4/√45; 2/√45; 5/√45]. For λ = −2, one computes that a basis for the eigenspace is [−2; −1; 2], which can be normalized to get u3 = [−2/3; −1/3; 2/3]. Let
P = [u1 u2 u3] = [1/√5 4/√45 −2/3; −2/√5 2/√45 −1/3; 0 5/√45 2/3]  and  D = [7 0 0; 0 7 0; 0 0 −2]
Then P orthogonally diagonalizes A, and A = PDP^(-1).
20. Let A = [7 −4 4; −4 5 0; 4 0 9]. The eigenvalues of A are 13, 7, and 1. For λ = 13, one computes that a basis for the eigenspace is [2; −1; 2], which can be normalized to get u1 = [2/3; −1/3; 2/3]. For λ = 7, one computes that a basis for the eigenspace is [−1; 2; 2], which can be normalized to get u2 = [−1/3; 2/3; 2/3]. For λ = 1, one computes that a basis for the eigenspace is [2; 2; −1], which can be normalized to get u3 = [2/3; 2/3; −1/3]. Let
P = [u1 u2 u3] = [2/3 −1/3 2/3; −1/3 2/3 2/3; 2/3 2/3 −1/3]  and  D = [13 0 0; 0 7 0; 0 0 1]
Then P orthogonally diagonalizes A, and A = PDP^(-1).
21. Let A = [4 1 3 1; 1 4 1 3; 3 1 4 1; 1 3 1 4]. The eigenvalues of A are 9, 5, and 1. For λ = 9, one computes that a basis for the eigenspace is [1; 1; 1; 1], which can be normalized to get u1 = [1/2; 1/2; 1/2; 1/2]. For λ = 5, one computes that a basis for the eigenspace is [1; −1; 1; −1], which can be normalized to get u2 = [1/2; −1/2; 1/2; −1/2]. For λ = 1, one computes that a basis for the eigenspace is {[−1; 0; 1; 0], [0; −1; 0; 1]}. This basis is an orthogonal basis for the eigenspace, and these vectors can be normalized to get u3 = [−1/√2; 0; 1/√2; 0] and u4 = [0; −1/√2; 0; 1/√2]. Let
P = [u1 u2 u3 u4] = [1/2 1/2 −1/√2 0; 1/2 −1/2 0 −1/√2; 1/2 1/2 1/√2 0; 1/2 −1/2 0 1/√2]  and  D = [9 0 0 0; 0 5 0 0; 0 0 1 0; 0 0 0 1]
Then P orthogonally diagonalizes A, and A = PDP^(-1).
22. Let A = [2 0 0 0; 0 1 0 1; 0 0 2 0; 0 1 0 1]. The eigenvalues of A are 2 and 0. For λ = 2, one computes that a basis for the eigenspace is {[1; 0; 0; 0], [0; 1; 0; 1], [0; 0; 1; 0]}. This basis is an orthogonal basis for the eigenspace, and these vectors can be normalized to get u1 = [1; 0; 0; 0], u2 = [0; 1/√2; 0; 1/√2], and u3 = [0; 0; 1; 0]. For λ = 0, one computes that a basis for the eigenspace is [0; 1; 0; −1], which can be normalized to get u4 = [0; 1/√2; 0; −1/√2]. Let
P = [u1 u2 u3 u4] = [1 0 0 0; 0 1/√2 0 1/√2; 0 0 1 0; 0 1/√2 0 −1/√2]  and  D = [2 0 0 0; 0 2 0 0; 0 0 2 0; 0 0 0 0]
Then P orthogonally diagonalizes A, and A = PDP^(-1).
23. Let A = [3 1 1; 1 3 1; 1 1 3]. Since each row of A sums to 5,
A[1; 1; 1] = [5; 5; 5] = 5[1; 1; 1]
and [1; 1; 1] is an eigenvector of A with corresponding eigenvalue λ = 5. The eigenvector may be normalized to get u1 = [1/√3; 1/√3; 1/√3]. For λ = 2, one computes that a basis for the eigenspace is {[−1; 1; 0], [−1; 0; 1]}, so λ = 2 is an eigenvalue of A. This basis may be converted via orthogonal projection to an orthogonal basis {[−1; 1; 0], [−1; −1; 2]} for the eigenspace, and these vectors can be normalized to get u2 = [−1/√2; 1/√2; 0] and u3 = [−1/√6; −1/√6; 2/√6]. Let
P = [u1 u2 u3] = [1/√3 −1/√2 −1/√6; 1/√3 1/√2 −1/√6; 1/√3 0 2/√6]  and  D = [5 0 0; 0 2 0; 0 0 2]
Then P orthogonally diagonalizes A, and A = PDP^(-1).
24. Let A = [5 −4 −2; −4 5 2; −2 2 2]. One may compute that
A[−2; 2; 1] = [−20; 20; 10] = 10[−2; 2; 1]
so v1 = [−2; 2; 1] is an eigenvector of A with associated eigenvalue λ1 = 10. Likewise one may compute that
A[1; 1; 0] = [1; 1; 0]
so [1; 1; 0] is an eigenvector of A with associated eigenvalue λ2 = 1. For λ2 = 1, one computes that a basis for the eigenspace is {[1; 1; 0], [1; 0; 2]}. This basis may be converted via orthogonal projection to an orthogonal basis for the eigenspace: {v2, v3} = {[1; 1; 0], [1; −1; 4]}. The eigenvectors v1, v2, and v3 may be normalized to get the vectors
u1 = [−2/3; 2/3; 1/3], u2 = [1/√2; 1/√2; 0], and u3 = [1/√18; −1/√18; 4/√18]
Let
P = [u1 u2 u3] = [−2/3 1/√2 1/√18; 2/3 1/√2 −1/√18; 1/3 0 4/√18]  and  D = [10 0 0; 0 1 0; 0 0 1]
Then P orthogonally diagonalizes A, and A = PDP^(-1).
25. a. True. See Theorem 2 and the paragraph preceding the theorem.
b. True. This is a particular case of the statement in Theorem 1, where u and v are nonzero.
c. False. There are n real eigenvalues (Theorem 3), but they need not be distinct (Example 3).
d. False. See the paragraph following formula (2), in which each u is a unit vector.
26. a. True. See Theorem 2.
b. True. See the displayed equation in the paragraph before Theorem 2.
c. False. An orthogonal matrix can be symmetric (and hence orthogonally diagonalizable), but not every orthogonal matrix is symmetric. See the matrix P in Example 2.
d. True. See Theorem 3(b).
27. Since A is symmetric, (B^T AB)^T = B^T A^T B^(TT) = B^T AB, and B^T AB is symmetric. Applying this result with A = I gives that B^T B is symmetric. Finally, (BB^T)^T = B^(TT) B^T = BB^T, so BB^T is symmetric.
28. Let A be an n × n symmetric matrix. Then
(Ax) · y = (Ax)^T y = x^T A^T y = x^T Ay = x · (Ay)
since A^T = A.
29. Since A is orthogonally diagonalizable, A = PDP^(-1), where P is orthogonal and D is diagonal. Since A is invertible, A^(-1) = (PDP^(-1))^(-1) = PD^(-1)P^(-1). Notice that D^(-1) is a diagonal matrix, so A^(-1) is orthogonally diagonalizable.
30. If A and B are orthogonally diagonalizable, then A and B are symmetric by Theorem 2. If AB = BA, then (AB)^T = B^T A^T = BA = AB. So AB is symmetric and hence is orthogonally diagonalizable by Theorem 2.
31. The Diagonalization Theorem of Section 5.3 says that the columns of P are linearly independent
eigenvectors corresponding to the eigenvalues of A listed on the diagonal of D. So P has exactly k
columns of eigenvectors corresponding to λ. These k columns form a basis for the eigenspace.
32. If A = PRP^(-1), then P^(-1)AP = R. Since P is orthogonal, R = P^T AP. Hence
R^T = (P^T AP)^T = P^T A^T P^(TT) = P^T AP = R
which shows that R is symmetric. Since R is also upper triangular, its entries above the diagonal must be zeros to match the zeros below the diagonal. Thus R is a diagonal matrix.
33. It has previously been found that A is orthogonally diagonalized by P, where
P = [u1 u2 u3] = [1/√2 −1/√6 1/√3; −1/√2 −1/√6 1/√3; 0 2/√6 1/√3]  and  D = [8 0 0; 0 6 0; 0 0 3]
Thus the spectral decomposition of A is
A = λ1 u1 u1^T + λ2 u2 u2^T + λ3 u3 u3^T = 8 u1 u1^T + 6 u2 u2^T + 3 u3 u3^T
  = 8[1/2 −1/2 0; −1/2 1/2 0; 0 0 0] + 6[1/6 1/6 −2/6; 1/6 1/6 −2/6; −2/6 −2/6 4/6] + 3[1/3 1/3 1/3; 1/3 1/3 1/3; 1/3 1/3 1/3]
34. It has previously been found that A is orthogonally diagonalized by P, where
P = [u1 u2 u3] = [1/√2 −1/√18 −2/3; 0 4/√18 −1/3; 1/√2 1/√18 2/3]  and  D = [7 0 0; 0 7 0; 0 0 −2]
Thus the spectral decomposition of A is
A = λ1 u1 u1^T + λ2 u2 u2^T + λ3 u3 u3^T = 7 u1 u1^T + 7 u2 u2^T − 2 u3 u3^T
  = 7[1/2 0 1/2; 0 0 0; 1/2 0 1/2] + 7[1/18 −4/18 −1/18; −4/18 16/18 4/18; −1/18 4/18 1/18] − 2[4/9 2/9 −4/9; 2/9 1/9 −2/9; −4/9 −2/9 4/9]
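The spectral decomposition used in Exercises 33–34 states that a symmetric A is the sum of the rank-one matrices λ_i u_i u_i^T over an orthonormal eigenbasis. A sketch with a symmetric matrix chosen here:

```python
import numpy as np

A = np.array([[1.0, 1.0, 3.0],
              [1.0, 3.0, 1.0],
              [3.0, 1.0, 1.0]])
w, P = np.linalg.eigh(A)                 # orthonormal eigenbasis in the columns of P
# Rebuild A as the sum of lambda_i * u_i u_i^T.
S = sum(w[i] * np.outer(P[:, i], P[:, i]) for i in range(3))
assert np.allclose(S, A)
```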
35. a. Given x in ℝⁿ, Bx = (uu^T)x = u(u^T x) = (u^T x)u = (x · u)u, because u^T x is a scalar. So Bx = (x · u)u. Since u is a unit vector, Bx is the orthogonal projection of x onto u.
b. Since B^T = (uu^T)^T = u^(TT) u^T = uu^T = B, B is a symmetric matrix. Also,
B² = (uu^T)(uu^T) = u(u^T u)u^T = uu^T = B
because u^T u = 1.
c. Since u^T u = 1, Bu = (uu^T)u = u(u^T u) = u(1) = u, so u is an eigenvector of B with corresponding eigenvalue 1.
36. Given any y in ℝⁿ, let ŷ = By and z = y − ŷ. Suppose that B^T = B and B² = B. Then B^T B = BB = B.
a. Since
z · ŷ = (y − ŷ) · (By) = y · (By) − ŷ · (By) = y^T By − (By)^T (By) = y^T By − y^T B^T By = y^T By − y^T By = 0
z is orthogonal to ŷ.
b. Any vector in W = Col B has the form Bu for some u. Noting that B is symmetric, Exercise 28 gives
(y − ŷ) · (Bu) = [B(y − ŷ)] · u = [By − BBy] · u = 0
since B² = B. So y − ŷ is in W⊥, and the decomposition y = ŷ + (y − ŷ) expresses y as the sum of a vector in W and a vector in W⊥. By the Orthogonal Decomposition Theorem in Section 6.3, this decomposition is unique, and so ŷ must be proj_W y.
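The properties proved in Exercise 35 can be checked numerically. In this sketch the unit vector u and test vector x are chosen here for illustration:

```python
import numpy as np

u = np.array([1.0, 2.0, 2.0]) / 3.0        # a unit vector
B = np.outer(u, u)                         # B = u u^T
x = np.array([1.0, 0.0, 1.0])
assert np.allclose(B, B.T)                 # B is symmetric
assert np.allclose(B @ B, B)               # B^2 = B
assert np.allclose(B @ x, (x @ u) * u)     # Bx = (x . u) u, the projection onto u
assert np.allclose(B @ u, u)               # u is an eigenvector with eigenvalue 1
```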
37. [M] Let A = [5 −2 9 6; −2 5 6 9; 9 6 5 −2; 6 9 −2 5]. The eigenvalues of A are 18, 10, 4, and −12. For λ = 18, one computes that a basis for the eigenspace is [1; 1; 1; 1], which can be normalized to get u1 = [1/2; 1/2; 1/2; 1/2]. For λ = 10, one computes that a basis for the eigenspace is [1; −1; 1; −1], which can be normalized to get u2 = [1/2; −1/2; 1/2; −1/2]. For λ = 4, one computes that a basis for the eigenspace is [1; −1; −1; 1], which can be normalized to get u3 = [1/2; −1/2; −1/2; 1/2]. For λ = −12, one computes that a basis for the eigenspace is [1; 1; −1; −1], which can be normalized to get u4 = [1/2; 1/2; −1/2; −1/2]. Let
P = [u1 u2 u3 u4] = [1/2 1/2 1/2 1/2; 1/2 −1/2 −1/2 1/2; 1/2 1/2 −1/2 −1/2; 1/2 −1/2 1/2 −1/2]  and  D = [18 0 0 0; 0 10 0 0; 0 0 4 0; 0 0 0 −12]
Then P orthogonally diagonalizes A, and A = PDP^(-1).
38. [M] Let A = [.38 −.18 −.06 −.04; −.18 .59 −.04 .12; −.06 −.04 .47 −.12; −.04 .12 −.12 .41]. The eigenvalues of A are .25, .30, .55, and .75. For λ = .25, one computes that a basis for the eigenspace is [4; 2; 2; 1], which can be normalized to get u1 = [.8; .4; .4; .2]. For λ = .30, one computes that a basis for the eigenspace is [1; 2; −2; −4], which can be normalized to get u2 = [.2; .4; −.4; −.8]. For λ = .55, one computes that a basis for the eigenspace is [2; −1; −4; 2], which can be normalized to get u3 = [.4; −.2; −.8; .4]. For λ = .75, one computes that a basis for the eigenspace is [2; −4; 1; −2], which can be normalized to get u4 = [.4; −.8; .2; −.4]. Let
P = [u1 u2 u3 u4] = [.8 .2 .4 .4; .4 .4 −.2 −.8; .4 −.4 −.8 .2; .2 −.8 .4 −.4]  and  D = [.25 0 0 0; 0 .30 0 0; 0 0 .55 0; 0 0 0 .75]
Then P orthogonally diagonalizes A, and A = PDP^(-1).
39. [M] Let A = [.31 .58 −.08 −.44; .58 −.56 −.44 .58; −.08 −.44 .19 −.08; −.44 .58 −.08 .31]. The eigenvalues of A are .75, 0, and −1.25. For λ = .75, one computes that a basis for the eigenspace is {[1; 0; 0; −1], [3; 2; −2; 0]}. This basis may be converted via orthogonal projection to the orthogonal basis {[1; 0; 0; −1], [3; 4; −4; 3]}. These vectors can be normalized to get u1 = [1/√2; 0; 0; −1/√2] and u2 = [3/√50; 4/√50; −4/√50; 3/√50]. For λ = 0, one computes that a basis for the eigenspace is [2; 1; 4; 2], which can be normalized to get u3 = [.4; .2; .8; .4]. For λ = −1.25, one computes that a basis for the eigenspace is [2; −4; −1; 2], which can be normalized to get u4 = [.4; −.8; −.2; .4]. Let
P = [u1 u2 u3 u4] = [1/√2 3/√50 .4 .4; 0 4/√50 .2 −.8; 0 −4/√50 .8 −.2; −1/√2 3/√50 .4 .4]  and  D = [.75 0 0 0; 0 .75 0 0; 0 0 0 0; 0 0 0 −1.25]
Then P orthogonally diagonalizes A, and A = PDP^(-1).
40. [M] Let A = [10 2 2 6 9; 2 10 2 6 9; 2 2 10 6 9; 6 6 6 26 −9; 9 9 9 −9 −19]. The eigenvalues of A are 8, 32, −28, and 17. For λ = 8, one computes that a basis for the eigenspace is {[−1; 1; 0; 0; 0], [−1; 0; 1; 0; 0]}. This basis may be converted via orthogonal projection to the orthogonal basis {[−1; 1; 0; 0; 0], [−1; −1; 2; 0; 0]}. These vectors can be normalized to get u1 = [−1/√2; 1/√2; 0; 0; 0] and u2 = [−1/√6; −1/√6; 2/√6; 0; 0]. For λ = 32, one computes that a basis for the eigenspace is [1; 1; 1; 3; 0], which can be normalized to get u3 = [1/√12; 1/√12; 1/√12; 3/√12; 0]. For λ = −28, one computes that a basis for the eigenspace is [1; 1; 1; −1; −4], which can be normalized to get u4 = [1/√20; 1/√20; 1/√20; −1/√20; −4/√20]. For λ = 17, one computes that a basis for the eigenspace is [1; 1; 1; −1; 1], which can be normalized to get u5 = [1/√5; 1/√5; 1/√5; −1/√5; 1/√5]. Let
P = [u1 u2 u3 u4 u5] = [−1/√2 −1/√6 1/√12 1/√20 1/√5; 1/√2 −1/√6 1/√12 1/√20 1/√5; 0 2/√6 1/√12 1/√20 1/√5; 0 0 3/√12 −1/√20 −1/√5; 0 0 0 −4/√20 1/√5]
and D = [8 0 0 0 0; 0 8 0 0 0; 0 0 32 0 0; 0 0 0 −28 0; 0 0 0 0 17]. Then P orthogonally diagonalizes A, and A = PDP^(-1).
7.2 SOLUTIONS
Notes:
This section can provide a good conclusion to the course, because the mathematics here is widely
used in applications. For instance, Exercises 23 and 24 can be used to develop the second derivative test
for functions of two variables. However, if time permits, some interesting applications still lie ahead.
Theorem 4 is used to prove Theorem 6 in Section 7.3, which in turn is used to develop the singular value
decomposition.
1. a. x^T Ax = [x1 x2][5 1/3; 1/3 1][x1; x2] = 5x1² + (2/3)x1x2 + x2²
b. When x = [6; 1], x^T Ax = 5(6)² + (2/3)(6)(1) + (1)² = 185.
c. When x = [1; 3], x^T Ax = 5(1)² + (2/3)(1)(3) + (3)² = 16.
2. a. x^T Ax = [x1 x2 x3][4 3 0; 3 2 1; 0 1 1][x1; x2; x3] = 4x1² + 2x2² + x3² + 6x1x2 + 2x2x3
b. When x = [2; −1; 5], x^T Ax = 4(2)² + 2(−1)² + (5)² + 6(2)(−1) + 2(−1)(5) = 21.
c. When x = [1/√3; 1/√3; 1/√3], x^T Ax = 4(1/√3)² + 2(1/√3)² + (1/√3)² + 6(1/√3)(1/√3) + 2(1/√3)(1/√3) = 5.
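Evaluating a quadratic form numerically is a single matrix-vector product. A sketch using the matrix and vector of Exercise 1(b):

```python
import numpy as np

A = np.array([[5.0, 1.0 / 3.0],
              [1.0 / 3.0, 1.0]])
x = np.array([6.0, 1.0])
Q = x @ A @ x                 # the quadratic form x^T A x
assert np.isclose(Q, 185.0)   # matches the hand computation in 1(b)
```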
3. a. The matrix of the quadratic form is [10 −3; −3 −3].
b. The matrix of the quadratic form is [5 3/2; 3/2 0].
4. a. The matrix of the quadratic form is [20 15/2; 15/2 10].
b. The matrix of the quadratic form is [0 1/2; 1/2 0].
5. a. The matrix of the quadratic form is [8 −3 2; −3 7 −1; 2 −1 −3].
b. The matrix of the quadratic form is [0 2 3; 2 0 4; 3 4 0].
6. a. The matrix of the quadratic form is [5 5/2 3/2; 5/2 1 0; 3/2 0 7].
b. The matrix of the quadratic form is [0 2 0; 2 0 2; 0 2 1].
7. The matrix of the quadratic form is A = [1 5; 5 1]. The eigenvalues of A are 6 and −4. An eigenvector for λ = 6 is [1; 1], which may be normalized to u1 = [1/√2; 1/√2]. An eigenvector for λ = −4 is [−1; 1], which may be normalized to u2 = [−1/√2; 1/√2]. Then A = PDP^(-1), where
P = [u1 u2] = [1/√2 −1/√2; 1/√2 1/√2]  and  D = [6 0; 0 −4]
The desired change of variable is x = Py, and the new quadratic form is
x^T Ax = (Py)^T A(Py) = y^T P^T APy = y^T Dy = 6y1² − 4y2²
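The change of variable in Exercise 7 can be checked numerically: with x = Py and P^T AP = D, the form x^T Ax collapses to a sum of λ_i y_i². In this sketch the test vector y is chosen here for illustration:

```python
import numpy as np

A = np.array([[1.0, 5.0], [5.0, 1.0]])       # matrix of the quadratic form
w, P = np.linalg.eigh(A)                     # w ascending: [-4, 6]
y = np.array([1.0, 2.0])                     # any y works
x = P @ y                                    # the change of variable x = P y
assert np.allclose(x @ A @ x, w[0] * y[0]**2 + w[1] * y[1]**2)
```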
8. The matrix of the quadratic form is A = [9 −4 4; −4 7 0; 4 0 11]. The eigenvalues of A are 3, 9, and 15. An eigenvector for λ = 3 is [−2; −2; 1], which may be normalized to u1 = [−2/3; −2/3; 1/3]. An eigenvector for λ = 9 is [1; −2; −2], which may be normalized to u2 = [1/3; −2/3; −2/3]. An eigenvector for λ = 15 is [2; −1; 2], which may be normalized to u3 = [2/3; −1/3; 2/3]. Then A = PDP^(-1), where
P = [u1 u2 u3] = [−2/3 1/3 2/3; −2/3 −2/3 −1/3; 1/3 −2/3 2/3]  and  D = [3 0 0; 0 9 0; 0 0 15]
The desired change of variable is x = Py, and the new quadratic form is
x^T Ax = (Py)^T A(Py) = y^T P^T APy = y^T Dy = 3y1² + 9y2² + 15y3²
9. The matrix of the quadratic form is $A = \begin{bmatrix} 3 & 2 \\ 2 & 6 \end{bmatrix}$. The eigenvalues of A are 7 and 2, so the quadratic form is positive definite. An eigenvector for λ = 7 is $\begin{bmatrix} 1 \\ 2 \end{bmatrix}$, which may be normalized to $\mathbf{u}_1 = \begin{bmatrix} 1/\sqrt{5} \\ 2/\sqrt{5} \end{bmatrix}$. An eigenvector for λ = 2 is $\begin{bmatrix} -2 \\ 1 \end{bmatrix}$, which may be normalized to $\mathbf{u}_2 = \begin{bmatrix} -2/\sqrt{5} \\ 1/\sqrt{5} \end{bmatrix}$. Then $A = PDP^{-1}$, where $P = \begin{bmatrix} \mathbf{u}_1 & \mathbf{u}_2 \end{bmatrix} = \begin{bmatrix} 1/\sqrt{5} & -2/\sqrt{5} \\ 2/\sqrt{5} & 1/\sqrt{5} \end{bmatrix}$ and $D = \begin{bmatrix} 7 & 0 \\ 0 & 2 \end{bmatrix}$. The desired change of variable is x = Py, and the new quadratic form is
$\mathbf{x}^T A\mathbf{x} = (P\mathbf{y})^T A(P\mathbf{y}) = \mathbf{y}^T P^T A P\mathbf{y} = \mathbf{y}^T D\mathbf{y} = 7y_1^2 + 2y_2^2$
10. The matrix of the quadratic form is $A = \begin{bmatrix} 9 & -4 \\ -4 & 3 \end{bmatrix}$. The eigenvalues of A are 11 and 1, so the quadratic form is positive definite. An eigenvector for λ = 11 is $\begin{bmatrix} 2 \\ -1 \end{bmatrix}$, which may be normalized to $\mathbf{u}_1 = \begin{bmatrix} 2/\sqrt{5} \\ -1/\sqrt{5} \end{bmatrix}$. An eigenvector for λ = 1 is $\begin{bmatrix} 1 \\ 2 \end{bmatrix}$, which may be normalized to $\mathbf{u}_2 = \begin{bmatrix} 1/\sqrt{5} \\ 2/\sqrt{5} \end{bmatrix}$. Then $A = PDP^{-1}$, where $P = \begin{bmatrix} \mathbf{u}_1 & \mathbf{u}_2 \end{bmatrix} = \begin{bmatrix} 2/\sqrt{5} & 1/\sqrt{5} \\ -1/\sqrt{5} & 2/\sqrt{5} \end{bmatrix}$ and $D = \begin{bmatrix} 11 & 0 \\ 0 & 1 \end{bmatrix}$. The desired change of variable is x = Py, and the new quadratic form is
$\mathbf{x}^T A\mathbf{x} = (P\mathbf{y})^T A(P\mathbf{y}) = \mathbf{y}^T P^T A P\mathbf{y} = \mathbf{y}^T D\mathbf{y} = 11y_1^2 + y_2^2$
11. The matrix of the quadratic form is $A = \begin{bmatrix} 2 & 5 \\ 5 & 2 \end{bmatrix}$. The eigenvalues of A are 7 and –3, so the quadratic form is indefinite. An eigenvector for λ = 7 is $\begin{bmatrix} 1 \\ 1 \end{bmatrix}$, which may be normalized to $\mathbf{u}_1 = \begin{bmatrix} 1/\sqrt{2} \\ 1/\sqrt{2} \end{bmatrix}$. An eigenvector for λ = –3 is $\begin{bmatrix} -1 \\ 1 \end{bmatrix}$, which may be normalized to $\mathbf{u}_2 = \begin{bmatrix} -1/\sqrt{2} \\ 1/\sqrt{2} \end{bmatrix}$. Then $A = PDP^{-1}$, where $P = \begin{bmatrix} \mathbf{u}_1 & \mathbf{u}_2 \end{bmatrix} = \begin{bmatrix} 1/\sqrt{2} & -1/\sqrt{2} \\ 1/\sqrt{2} & 1/\sqrt{2} \end{bmatrix}$ and $D = \begin{bmatrix} 7 & 0 \\ 0 & -3 \end{bmatrix}$. The desired change of variable is x = Py, and the new quadratic form is
$\mathbf{x}^T A\mathbf{x} = (P\mathbf{y})^T A(P\mathbf{y}) = \mathbf{y}^T P^T A P\mathbf{y} = \mathbf{y}^T D\mathbf{y} = 7y_1^2 - 3y_2^2$
12. The matrix of the quadratic form is $A = \begin{bmatrix} -5 & 2 \\ 2 & -2 \end{bmatrix}$. The eigenvalues of A are –1 and –6, so the quadratic form is negative definite. An eigenvector for λ = –1 is $\begin{bmatrix} 1 \\ 2 \end{bmatrix}$, which may be normalized to $\mathbf{u}_1 = \begin{bmatrix} 1/\sqrt{5} \\ 2/\sqrt{5} \end{bmatrix}$. An eigenvector for λ = –6 is $\begin{bmatrix} -2 \\ 1 \end{bmatrix}$, which may be normalized to $\mathbf{u}_2 = \begin{bmatrix} -2/\sqrt{5} \\ 1/\sqrt{5} \end{bmatrix}$. Then $A = PDP^{-1}$, where $P = \begin{bmatrix} \mathbf{u}_1 & \mathbf{u}_2 \end{bmatrix} = \begin{bmatrix} 1/\sqrt{5} & -2/\sqrt{5} \\ 2/\sqrt{5} & 1/\sqrt{5} \end{bmatrix}$ and $D = \begin{bmatrix} -1 & 0 \\ 0 & -6 \end{bmatrix}$. The desired change of variable is x = Py, and the new quadratic form is
$\mathbf{x}^T A\mathbf{x} = (P\mathbf{y})^T A(P\mathbf{y}) = \mathbf{y}^T P^T A P\mathbf{y} = \mathbf{y}^T D\mathbf{y} = -y_1^2 - 6y_2^2$
13. The matrix of the quadratic form is $A = \begin{bmatrix} 1 & -3 \\ -3 & 9 \end{bmatrix}$. The eigenvalues of A are 10 and 0, so the quadratic form is positive semidefinite. An eigenvector for λ = 10 is $\begin{bmatrix} 1 \\ -3 \end{bmatrix}$, which may be normalized to $\mathbf{u}_1 = \begin{bmatrix} 1/\sqrt{10} \\ -3/\sqrt{10} \end{bmatrix}$. An eigenvector for λ = 0 is $\begin{bmatrix} 3 \\ 1 \end{bmatrix}$, which may be normalized to $\mathbf{u}_2 = \begin{bmatrix} 3/\sqrt{10} \\ 1/\sqrt{10} \end{bmatrix}$. Then $A = PDP^{-1}$, where $P = \begin{bmatrix} \mathbf{u}_1 & \mathbf{u}_2 \end{bmatrix} = \begin{bmatrix} 1/\sqrt{10} & 3/\sqrt{10} \\ -3/\sqrt{10} & 1/\sqrt{10} \end{bmatrix}$ and $D = \begin{bmatrix} 10 & 0 \\ 0 & 0 \end{bmatrix}$. The desired change of variable is x = Py, and the new quadratic form is
$\mathbf{x}^T A\mathbf{x} = (P\mathbf{y})^T A(P\mathbf{y}) = \mathbf{y}^T P^T A P\mathbf{y} = \mathbf{y}^T D\mathbf{y} = 10y_1^2$
14. The matrix of the quadratic form is $A = \begin{bmatrix} 8 & 3 \\ 3 & 0 \end{bmatrix}$. The eigenvalues of A are 9 and –1, so the quadratic form is indefinite. An eigenvector for λ = 9 is $\begin{bmatrix} 3 \\ 1 \end{bmatrix}$, which may be normalized to $\mathbf{u}_1 = \begin{bmatrix} 3/\sqrt{10} \\ 1/\sqrt{10} \end{bmatrix}$. An eigenvector for λ = –1 is $\begin{bmatrix} -1 \\ 3 \end{bmatrix}$, which may be normalized to $\mathbf{u}_2 = \begin{bmatrix} -1/\sqrt{10} \\ 3/\sqrt{10} \end{bmatrix}$. Then $A = PDP^{-1}$, where $P = \begin{bmatrix} \mathbf{u}_1 & \mathbf{u}_2 \end{bmatrix} = \begin{bmatrix} 3/\sqrt{10} & -1/\sqrt{10} \\ 1/\sqrt{10} & 3/\sqrt{10} \end{bmatrix}$ and $D = \begin{bmatrix} 9 & 0 \\ 0 & -1 \end{bmatrix}$. The desired change of variable is x = Py, and the new quadratic form is
$\mathbf{x}^T A\mathbf{x} = (P\mathbf{y})^T A(P\mathbf{y}) = \mathbf{y}^T P^T A P\mathbf{y} = \mathbf{y}^T D\mathbf{y} = 9y_1^2 - y_2^2$
15. [M] The matrix of the quadratic form is $A = \begin{bmatrix} -2 & 2 & 2 & 2 \\ 2 & -6 & 0 & 0 \\ 2 & 0 & -9 & 3 \\ 2 & 0 & 3 & -9 \end{bmatrix}$. The eigenvalues of A are 0, –6, –8, and –12, so the quadratic form is negative semidefinite. The corresponding eigenvectors may be computed:
λ = 0: $\begin{bmatrix} 3 \\ 1 \\ 1 \\ 1 \end{bmatrix}$, λ = –6: $\begin{bmatrix} 0 \\ -2 \\ 1 \\ 1 \end{bmatrix}$, λ = –8: $\begin{bmatrix} -1 \\ 1 \\ 1 \\ 1 \end{bmatrix}$, λ = –12: $\begin{bmatrix} 0 \\ 0 \\ -1 \\ 1 \end{bmatrix}$
These eigenvectors may be normalized to form the columns of P, and $A = PDP^{-1}$, where
$P = \begin{bmatrix} 3/\sqrt{12} & 0 & -1/2 & 0 \\ 1/\sqrt{12} & -2/\sqrt{6} & 1/2 & 0 \\ 1/\sqrt{12} & 1/\sqrt{6} & 1/2 & -1/\sqrt{2} \\ 1/\sqrt{12} & 1/\sqrt{6} & 1/2 & 1/\sqrt{2} \end{bmatrix}$ and $D = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & -6 & 0 & 0 \\ 0 & 0 & -8 & 0 \\ 0 & 0 & 0 & -12 \end{bmatrix}$
The desired change of variable is x = Py, and the new quadratic form is
$\mathbf{x}^T A\mathbf{x} = (P\mathbf{y})^T A(P\mathbf{y}) = \mathbf{y}^T P^T A P\mathbf{y} = \mathbf{y}^T D\mathbf{y} = -6y_2^2 - 8y_3^2 - 12y_4^2$
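The original [M] exercise is meant to be done with matrix software. A minimal sketch of the same computation in Python/NumPy (an assumption here is that NumPy stands in for the MATLAB environment the manual expects):

```python
import numpy as np

# Exercise 15: eigenvalues of the symmetric matrix of the quadratic form.
# For symmetric input, eigh returns eigenvalues in ascending order and
# orthonormal eigenvectors as columns.
A = np.array([[-2.,  2.,  2.,  2.],
              [ 2., -6.,  0.,  0.],
              [ 2.,  0., -9.,  3.],
              [ 2.,  0.,  3., -9.]])
vals, P = np.linalg.eigh(A)
print(np.round(vals))            # -12, -8, -6, 0: negative semidefinite
print(np.allclose(P @ np.diag(vals) @ P.T, A))   # True
```

The all-nonpositive spectrum confirms the classification in the solution; the zero eigenvalue is what makes the form semidefinite rather than definite.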
16. [M] The matrix of the quadratic form is $A = \begin{bmatrix} 4 & 3/2 & 0 & 2 \\ 3/2 & 4 & -2 & 0 \\ 0 & -2 & 4 & 3/2 \\ 2 & 0 & 3/2 & 4 \end{bmatrix}$. The eigenvalues of A are 13/2 and 3/2, so the quadratic form is positive definite. The corresponding eigenvectors may be computed:
λ = 13/2: $\left\{\begin{bmatrix} 3 \\ 5 \\ -4 \\ 0 \end{bmatrix}, \begin{bmatrix} 4 \\ 0 \\ 3 \\ 5 \end{bmatrix}\right\}$, λ = 3/2: $\left\{\begin{bmatrix} -3 \\ 5 \\ 4 \\ 0 \end{bmatrix}, \begin{bmatrix} 4 \\ 0 \\ 3 \\ -5 \end{bmatrix}\right\}$
Each set of eigenvectors above is already an orthogonal set, so they may be normalized to form the columns of P, and $A = PDP^{-1}$, where
$P = \begin{bmatrix} 3/\sqrt{50} & 4/\sqrt{50} & -3/\sqrt{50} & 4/\sqrt{50} \\ 5/\sqrt{50} & 0 & 5/\sqrt{50} & 0 \\ -4/\sqrt{50} & 3/\sqrt{50} & 4/\sqrt{50} & 3/\sqrt{50} \\ 0 & 5/\sqrt{50} & 0 & -5/\sqrt{50} \end{bmatrix}$ and $D = \begin{bmatrix} 13/2 & 0 & 0 & 0 \\ 0 & 13/2 & 0 & 0 \\ 0 & 0 & 3/2 & 0 \\ 0 & 0 & 0 & 3/2 \end{bmatrix}$
The desired change of variable is x = Py, and the new quadratic form is
$\mathbf{x}^T A\mathbf{x} = (P\mathbf{y})^T A(P\mathbf{y}) = \mathbf{y}^T P^T A P\mathbf{y} = \mathbf{y}^T D\mathbf{y} = \frac{13}{2}y_1^2 + \frac{13}{2}y_2^2 + \frac{3}{2}y_3^2 + \frac{3}{2}y_4^2$
17. [M] The matrix of the quadratic form is $A = \begin{bmatrix} 1 & 9/2 & 0 & -6 \\ 9/2 & 1 & 6 & 0 \\ 0 & 6 & 1 & 9/2 \\ -6 & 0 & 9/2 & 1 \end{bmatrix}$. The eigenvalues of A are 17/2 and –13/2, so the quadratic form is indefinite. The corresponding eigenvectors may be computed:
λ = 17/2: $\left\{\begin{bmatrix} 3 \\ 5 \\ 4 \\ 0 \end{bmatrix}, \begin{bmatrix} -4 \\ 0 \\ 3 \\ 5 \end{bmatrix}\right\}$, λ = –13/2: $\left\{\begin{bmatrix} 3 \\ -5 \\ 4 \\ 0 \end{bmatrix}, \begin{bmatrix} 4 \\ 0 \\ -3 \\ 5 \end{bmatrix}\right\}$
Each set of eigenvectors above is already an orthogonal set, so they may be normalized to form the columns of P, and $A = PDP^{-1}$, where
$P = \begin{bmatrix} 3/\sqrt{50} & -4/\sqrt{50} & 3/\sqrt{50} & 4/\sqrt{50} \\ 5/\sqrt{50} & 0 & -5/\sqrt{50} & 0 \\ 4/\sqrt{50} & 3/\sqrt{50} & 4/\sqrt{50} & -3/\sqrt{50} \\ 0 & 5/\sqrt{50} & 0 & 5/\sqrt{50} \end{bmatrix}$ and $D = \begin{bmatrix} 17/2 & 0 & 0 & 0 \\ 0 & 17/2 & 0 & 0 \\ 0 & 0 & -13/2 & 0 \\ 0 & 0 & 0 & -13/2 \end{bmatrix}$
The desired change of variable is x = Py, and the new quadratic form is
$\mathbf{x}^T A\mathbf{x} = (P\mathbf{y})^T A(P\mathbf{y}) = \mathbf{y}^T P^T A P\mathbf{y} = \mathbf{y}^T D\mathbf{y} = \frac{17}{2}y_1^2 + \frac{17}{2}y_2^2 - \frac{13}{2}y_3^2 - \frac{13}{2}y_4^2$
18. [M] The matrix of the quadratic form is $A = \begin{bmatrix} 11 & -6 & -6 & -6 \\ -6 & -1 & 0 & 0 \\ -6 & 0 & 0 & -1 \\ -6 & 0 & -1 & 0 \end{bmatrix}$. The eigenvalues of A are 17, 1, –1, and –7, so the quadratic form is indefinite. The corresponding eigenvectors may be computed:
λ = 17: $\begin{bmatrix} 3 \\ -1 \\ -1 \\ -1 \end{bmatrix}$, λ = 1: $\begin{bmatrix} 0 \\ 0 \\ 1 \\ -1 \end{bmatrix}$, λ = –1: $\begin{bmatrix} 0 \\ 2 \\ -1 \\ -1 \end{bmatrix}$, λ = –7: $\begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix}$
These eigenvectors may be normalized to form the columns of P, and $A = PDP^{-1}$, where
$P = \begin{bmatrix} 3/\sqrt{12} & 0 & 0 & 1/2 \\ -1/\sqrt{12} & 0 & 2/\sqrt{6} & 1/2 \\ -1/\sqrt{12} & 1/\sqrt{2} & -1/\sqrt{6} & 1/2 \\ -1/\sqrt{12} & -1/\sqrt{2} & -1/\sqrt{6} & 1/2 \end{bmatrix}$ and $D = \begin{bmatrix} 17 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -7 \end{bmatrix}$
The desired change of variable is x = Py, and the new quadratic form is
$\mathbf{x}^T A\mathbf{x} = (P\mathbf{y})^T A(P\mathbf{y}) = \mathbf{y}^T P^T A P\mathbf{y} = \mathbf{y}^T D\mathbf{y} = 17y_1^2 + y_2^2 - y_3^2 - 7y_4^2$
19. Since 8 is larger than 5, the $x_2^2$ term should be as large as possible. Since $x_1^2 + x_2^2 = 1$, the largest value that $x_2^2$ can take is 1, and $x_1^2 = 0$ when $x_2^2 = 1$. Thus the largest value the quadratic form can take when $\mathbf{x}^T\mathbf{x} = 1$ is 5(0) + 8(1) = 8.
20. Since 5 is larger in absolute value than –3, the $x_1^2$ term should be as large as possible. Since $x_1^2 + x_2^2 = 1$, the largest value that $x_1^2$ can take is 1, and $x_2^2 = 0$ when $x_1^2 = 1$. Thus the largest value the quadratic form can take when $\mathbf{x}^T\mathbf{x} = 1$ is 5(1) – 3(0) = 5.
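The reasoning in Exercises 19 and 20 can be confirmed by sampling the unit circle. A minimal sketch in Python/NumPy (not part of the original manual; the grid size is an arbitrary choice):

```python
import numpy as np

# Exercises 19-20: for a diagonal quadratic form, the maximum over the
# unit circle is the larger diagonal entry. Parametrize the circle and
# evaluate both forms on a fine grid.
theta = np.linspace(0, 2*np.pi, 10001)
x1, x2 = np.cos(theta), np.sin(theta)
q19 = 5*x1**2 + 8*x2**2          # max 8 at (0, +-1)
q20 = 5*x1**2 - 3*x2**2          # max 5 at (+-1, 0)
print(q19.max(), q20.max())
```

The grid happens to contain the maximizers exactly, so the printed maxima match the hand computation.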
21. a. True. See the definition before Example 1, even though a nonsymmetric matrix could be used to
compute values of a quadratic form.
b . True. See the paragraph following Example 3.
c . True. The columns of P in Theorem 4 are eigenvectors of A. See the Diagonalization Theorem in
Section 5.3.
d . False. Q(x) = 0 when x = 0.
e . True. See Theorem 5(a).
f. True. See the Numerical Note after Example 6.
22. a. True. See the paragraph before Example 1.

b. False. The matrix P must be orthogonal and make $P^TAP$ diagonal. See the paragraph before Example 4.

c. False. There are also “degenerate” cases: a single point, two intersecting lines, or no points at all. See the subsection “A Geometric View of Principal Axes.”

d. False. See the definition before Theorem 5.

e. True. See Theorem 5(b). If $\mathbf{x}^TA\mathbf{x}$ has only negative values for x ≠ 0, then $\mathbf{x}^TA\mathbf{x}$ is negative definite.
23. The characteristic polynomial of A may be written in two ways:
$\det(A - \lambda I) = \det\begin{bmatrix} a-\lambda & b \\ b & d-\lambda \end{bmatrix} = \lambda^2 - (a+d)\lambda + ad - b^2$
and
$(\lambda - \lambda_1)(\lambda - \lambda_2) = \lambda^2 - (\lambda_1 + \lambda_2)\lambda + \lambda_1\lambda_2$
The coefficients in these polynomials may be equated to obtain $\lambda_1 + \lambda_2 = a + d$ and $\lambda_1\lambda_2 = ad - b^2 = \det A$.
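The trace and determinant identities in Exercise 23 are easy to verify on a concrete matrix. A minimal sketch in Python/NumPy, reusing the matrix of Exercise 11 as a test case (not part of the original manual):

```python
import numpy as np

# Exercise 23: for A = [[a, b], [b, d]], the eigenvalues satisfy
# lambda1 + lambda2 = a + d  and  lambda1 * lambda2 = ad - b^2 = det A.
a, b, d = 2.0, 5.0, 2.0          # the matrix from Exercise 11
A = np.array([[a, b], [b, d]])
l1, l2 = np.linalg.eigvalsh(A)   # eigenvalues -3 and 7
print(l1 + l2, a + d)            # both 4.0
print(l1 * l2, a*d - b**2)       # both -21.0
```

Since the product of the eigenvalues is negative here, Exercise 24(c) correctly classifies this form as indefinite.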
24. If det A > 0, then by Exercise 23, $\lambda_1\lambda_2 = \det A > 0$, so that $\lambda_1$ and $\lambda_2$ have the same sign; also, $ad = b^2 + \det A > 0$.

a. If det A > 0 and a > 0, then d > 0 also, since ad > 0. By Exercise 23, $\lambda_1 + \lambda_2 = a + d > 0$. Since $\lambda_1$ and $\lambda_2$ have the same sign, they are both positive. So Q is positive definite by Theorem 5.

b. If det A > 0 and a < 0, then d < 0 also, since ad > 0. By Exercise 23, $\lambda_1 + \lambda_2 = a + d < 0$. Since $\lambda_1$ and $\lambda_2$ have the same sign, they are both negative. So Q is negative definite by Theorem 5.

c. If det A < 0, then by Exercise 23, $\lambda_1\lambda_2 = \det A < 0$. Thus $\lambda_1$ and $\lambda_2$ have opposite signs. So Q is indefinite by Theorem 5.
25. Exercise 27 in Section 7.1 showed that $B^TB$ is symmetric. Also $\mathbf{x}^TB^TB\mathbf{x} = (B\mathbf{x})^T(B\mathbf{x}) = \|B\mathbf{x}\|^2 \ge 0$, so the quadratic form is positive semidefinite, and the matrix $B^TB$ is positive semidefinite. Suppose that B is square and invertible. Then if $\mathbf{x}^TB^TB\mathbf{x} = 0$, $\|B\mathbf{x}\| = 0$ and $B\mathbf{x} = \mathbf{0}$. Since B is invertible, x = 0. Thus if x ≠ 0, $\mathbf{x}^TB^TB\mathbf{x} > 0$ and $B^TB$ is positive definite.
26. Let $A = PDP^T$, where $P^T = P^{-1}$. The eigenvalues of A are all positive: denote them $\lambda_1, \ldots, \lambda_n$. Let C be the diagonal matrix with $\sqrt{\lambda_1}, \ldots, \sqrt{\lambda_n}$ on its diagonal. Then $D = C^2 = C^TC$. If $B = PCP^T$, then B is positive definite because its eigenvalues are the positive numbers on the diagonal of C. Also
$B^TB = (PCP^T)^T(PCP^T) = PC^TP^TPCP^T = PC^TCP^T = PDP^T = A$
since $P^TP = I$.
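The square-root construction of Exercise 26 is short to carry out numerically. A minimal sketch in Python/NumPy, using a small positive definite matrix chosen here only as an example:

```python
import numpy as np

# Exercise 26: build B = P C P^T with B^T B = A from the spectral
# decomposition of a positive definite matrix A.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])       # eigenvalues 1 and 3, both positive
vals, P = np.linalg.eigh(A)      # A = P diag(vals) P^T, P orthogonal
C = np.diag(np.sqrt(vals))       # C^2 = D
B = P @ C @ P.T
print(np.allclose(B.T @ B, A))   # True
```

Because B is symmetric here, $B^TB = B^2 = A$, so B is a symmetric square root of A; this is the design idea behind the proof.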
27. Since the eigenvalues of A and B are all positive, the quadratic forms $\mathbf{x}^TA\mathbf{x}$ and $\mathbf{x}^TB\mathbf{x}$ are positive definite by Theorem 5. Let x ≠ 0. Then $\mathbf{x}^TA\mathbf{x} > 0$ and $\mathbf{x}^TB\mathbf{x} > 0$, so $\mathbf{x}^T(A+B)\mathbf{x} = \mathbf{x}^TA\mathbf{x} + \mathbf{x}^TB\mathbf{x} > 0$, and the quadratic form $\mathbf{x}^T(A+B)\mathbf{x}$ is positive definite. Note that A + B is also a symmetric matrix. Thus by Theorem 5 all the eigenvalues of A + B must be positive.
28. The eigenvalues of A are all positive by Theorem 5. Since the eigenvalues of $A^{-1}$ are the reciprocals of the eigenvalues of A (see Exercise 25 in Section 5.1), the eigenvalues of $A^{-1}$ are all positive. Note that $A^{-1}$ is also a symmetric matrix. By Theorem 5, the quadratic form $\mathbf{x}^TA^{-1}\mathbf{x}$ is positive definite.
7.3 SOLUTIONS
Notes:
Theorem 6 is the main result needed in the next two sections. Theorem 7 is mentioned in Example
2 of Section 7.4. Theorem 8 is needed at the very end of Section 7.5. The economic principles in Example
6 may be familiar to students who have had a course in macroeconomics.
1. The matrix of the quadratic form on the left is $A = \begin{bmatrix} 5 & 2 & 0 \\ 2 & 6 & -2 \\ 0 & -2 & 7 \end{bmatrix}$. The equality of the quadratic forms implies that the eigenvalues of A are 9, 6, and 3. An eigenvector may be calculated for each eigenvalue and normalized:
λ = 9: $\begin{bmatrix} 1/3 \\ 2/3 \\ -2/3 \end{bmatrix}$, λ = 6: $\begin{bmatrix} 2/3 \\ 1/3 \\ 2/3 \end{bmatrix}$, λ = 3: $\begin{bmatrix} 2/3 \\ -2/3 \\ -1/3 \end{bmatrix}$
The desired change of variable is x = Py, where $P = \begin{bmatrix} 1/3 & 2/3 & 2/3 \\ 2/3 & 1/3 & -2/3 \\ -2/3 & 2/3 & -1/3 \end{bmatrix}$.
2. The matrix of the quadratic form on the left is $A = \begin{bmatrix} 3 & -1 & 1 \\ -1 & 2 & -2 \\ 1 & -2 & 2 \end{bmatrix}$. The equality of the quadratic forms implies that the eigenvalues of A are 5, 2, and 0. An eigenvector may be calculated for each eigenvalue and normalized:
λ = 5: $\begin{bmatrix} 1/\sqrt{3} \\ -1/\sqrt{3} \\ 1/\sqrt{3} \end{bmatrix}$, λ = 2: $\begin{bmatrix} 2/\sqrt{6} \\ 1/\sqrt{6} \\ -1/\sqrt{6} \end{bmatrix}$, λ = 0: $\begin{bmatrix} 0 \\ 1/\sqrt{2} \\ 1/\sqrt{2} \end{bmatrix}$
The desired change of variable is x = Py, where $P = \begin{bmatrix} 1/\sqrt{3} & 2/\sqrt{6} & 0 \\ -1/\sqrt{3} & 1/\sqrt{6} & 1/\sqrt{2} \\ 1/\sqrt{3} & -1/\sqrt{6} & 1/\sqrt{2} \end{bmatrix}$.
3. (a) By Theorem 6, the maximum value of $\mathbf{x}^TA\mathbf{x}$ subject to the constraint $\mathbf{x}^T\mathbf{x} = 1$ is the greatest eigenvalue $\lambda_1$ of A. By Exercise 1, $\lambda_1 = 9$.

(b) By Theorem 6, the maximum value of $\mathbf{x}^TA\mathbf{x}$ subject to the constraint $\mathbf{x}^T\mathbf{x} = 1$ occurs at a unit eigenvector u corresponding to the greatest eigenvalue $\lambda_1$ of A. By Exercise 1, $\mathbf{u} = \begin{bmatrix} 1/3 \\ 2/3 \\ -2/3 \end{bmatrix}$.

(c) By Theorem 7, the maximum value of $\mathbf{x}^TA\mathbf{x}$ subject to the constraints $\mathbf{x}^T\mathbf{x} = 1$ and $\mathbf{x}^T\mathbf{u} = 0$ is the second greatest eigenvalue $\lambda_2$ of A. By Exercise 1, $\lambda_2 = 6$.
4. (a) By Theorem 6, the maximum value of $\mathbf{x}^TA\mathbf{x}$ subject to the constraint $\mathbf{x}^T\mathbf{x} = 1$ is the greatest eigenvalue $\lambda_1$ of A. By Exercise 2, $\lambda_1 = 5$.

(b) By Theorem 6, the maximum value of $\mathbf{x}^TA\mathbf{x}$ subject to the constraint $\mathbf{x}^T\mathbf{x} = 1$ occurs at a unit eigenvector u corresponding to the greatest eigenvalue $\lambda_1$ of A. By Exercise 2, $\mathbf{u} = \begin{bmatrix} 1/\sqrt{3} \\ -1/\sqrt{3} \\ 1/\sqrt{3} \end{bmatrix}$.

(c) By Theorem 7, the maximum value of $\mathbf{x}^TA\mathbf{x}$ subject to the constraints $\mathbf{x}^T\mathbf{x} = 1$ and $\mathbf{x}^T\mathbf{u} = 0$ is the second greatest eigenvalue $\lambda_2$ of A. By Exercise 2, $\lambda_2 = 2$.
5. The matrix of the quadratic form is $A = \begin{bmatrix} 5 & 2 \\ 2 & 5 \end{bmatrix}$. The eigenvalues of A are $\lambda_1 = 7$ and $\lambda_2 = 3$.

(a) By Theorem 6, the maximum value of $\mathbf{x}^TA\mathbf{x}$ subject to the constraint $\mathbf{x}^T\mathbf{x} = 1$ is the greatest eigenvalue $\lambda_1$ of A, which is 7.

(b) By Theorem 6, the maximum value of $\mathbf{x}^TA\mathbf{x}$ subject to the constraint $\mathbf{x}^T\mathbf{x} = 1$ occurs at a unit eigenvector u corresponding to the greatest eigenvalue $\lambda_1$ of A. One may compute that $\begin{bmatrix} 1 \\ 1 \end{bmatrix}$ is an eigenvector corresponding to $\lambda_1 = 7$, so $\mathbf{u} = \begin{bmatrix} 1/\sqrt{2} \\ 1/\sqrt{2} \end{bmatrix}$.

(c) By Theorem 7, the maximum value of $\mathbf{x}^TA\mathbf{x}$ subject to the constraints $\mathbf{x}^T\mathbf{x} = 1$ and $\mathbf{x}^T\mathbf{u} = 0$ is the second greatest eigenvalue $\lambda_2$ of A, which is 3.
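Theorem 6 is easy to confirm numerically for Exercise 5: the largest eigenvalue and the value of the form at its unit eigenvector coincide. A minimal sketch in Python/NumPy (not part of the original manual):

```python
import numpy as np

# Exercise 5: the constrained maximum of x^T A x over unit vectors is
# the largest eigenvalue, attained at a corresponding unit eigenvector.
A = np.array([[5.0, 2.0],
              [2.0, 5.0]])
vals, vecs = np.linalg.eigh(A)   # ascending order: [3, 7]
u = vecs[:, -1]                  # unit eigenvector for lambda = 7
print(vals[-1])                  # 7.0
print(float(u @ A @ u))          # also 7.0, the constrained maximum
```

The second column of `vecs` returned by `eigh` may differ from the hand-computed eigenvector by a sign, but the value of the quadratic form is unchanged.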
6. The matrix of the quadratic form is $A = \begin{bmatrix} 7 & 3/2 \\ 3/2 & 3 \end{bmatrix}$. The eigenvalues of A are $\lambda_1 = 15/2$ and $\lambda_2 = 5/2$.

(a) By Theorem 6, the maximum value of $\mathbf{x}^TA\mathbf{x}$ subject to the constraint $\mathbf{x}^T\mathbf{x} = 1$ is the greatest eigenvalue $\lambda_1$ of A, which is 15/2.

(b) By Theorem 6, the maximum value of $\mathbf{x}^TA\mathbf{x}$ subject to the constraint $\mathbf{x}^T\mathbf{x} = 1$ occurs at a unit eigenvector u corresponding to the greatest eigenvalue $\lambda_1$ of A. One may compute that $\begin{bmatrix} 3 \\ 1 \end{bmatrix}$ is an eigenvector corresponding to $\lambda_1 = 15/2$, so $\mathbf{u} = \begin{bmatrix} 3/\sqrt{10} \\ 1/\sqrt{10} \end{bmatrix}$.

(c) By Theorem 7, the maximum value of $\mathbf{x}^TA\mathbf{x}$ subject to the constraints $\mathbf{x}^T\mathbf{x} = 1$ and $\mathbf{x}^T\mathbf{u} = 0$ is the second greatest eigenvalue $\lambda_2$ of A, which is 5/2.
7. The eigenvalues of the matrix of the quadratic form are $\lambda_1 = 2$, $\lambda_2 = -1$, and $\lambda_3 = -4$. By Theorem 6, the maximum value of $\mathbf{x}^TA\mathbf{x}$ subject to the constraint $\mathbf{x}^T\mathbf{x} = 1$ occurs at a unit eigenvector u corresponding to the greatest eigenvalue $\lambda_1$ of A. One may compute that $\begin{bmatrix} 1/2 \\ 1 \\ 1 \end{bmatrix}$ is an eigenvector corresponding to $\lambda_1 = 2$, so $\mathbf{u} = \begin{bmatrix} 1/3 \\ 2/3 \\ 2/3 \end{bmatrix}$.
8. The eigenvalues of the matrix of the quadratic form are $\lambda_1 = 9$ and $\lambda_2 = 3$. By Theorem 6, the maximum value of $\mathbf{x}^TA\mathbf{x}$ subject to the constraint $\mathbf{x}^T\mathbf{x} = 1$ occurs at a unit eigenvector u corresponding to the greatest eigenvalue $\lambda_1$ of A. One may compute that $\begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix}$ and $\begin{bmatrix} -2 \\ 1 \\ 0 \end{bmatrix}$ are linearly independent eigenvectors corresponding to $\lambda_1 = 9$, so u can be any unit vector which is a linear combination of $\begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix}$ and $\begin{bmatrix} -2 \\ 1 \\ 0 \end{bmatrix}$. Alternatively, u can be any unit vector which is orthogonal to the eigenspace corresponding to the eigenvalue $\lambda_2 = 3$. Since multiples of $\begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix}$ are eigenvectors corresponding to $\lambda_2 = 3$, u can be any unit vector orthogonal to $\begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix}$.
9. This is equivalent to finding the maximum value of $\mathbf{x}^TA\mathbf{x}$ subject to the constraint $\mathbf{x}^T\mathbf{x} = 1$. By Theorem 6, this value is the greatest eigenvalue $\lambda_1$ of the matrix of the quadratic form. The matrix of the quadratic form is $A = \begin{bmatrix} 7 & -1 \\ -1 & 3 \end{bmatrix}$, and the eigenvalues of A are $\lambda_1 = 5 + \sqrt{5}$, $\lambda_2 = 5 - \sqrt{5}$. Thus the desired constrained maximum value is $\lambda_1 = 5 + \sqrt{5}$.
10. This is equivalent to finding the maximum value of $\mathbf{x}^TA\mathbf{x}$ subject to the constraint $\mathbf{x}^T\mathbf{x} = 1$. By Theorem 6, this value is the greatest eigenvalue $\lambda_1$ of the matrix of the quadratic form. The matrix of the quadratic form is $A = \begin{bmatrix} -3 & -1 \\ -1 & 5 \end{bmatrix}$, and the eigenvalues of A are $\lambda_1 = 1 + \sqrt{17}$, $\lambda_2 = 1 - \sqrt{17}$. Thus the desired constrained maximum value is $\lambda_1 = 1 + \sqrt{17}$.
11. Since x is an eigenvector of A corresponding to the eigenvalue 3, Ax = 3x, and $\mathbf{x}^TA\mathbf{x} = \mathbf{x}^T(3\mathbf{x}) = 3(\mathbf{x}^T\mathbf{x}) = 3\|\mathbf{x}\|^2 = 3$ since x is a unit vector.
12. Let x be a unit eigenvector for the eigenvalue λ. Then $\mathbf{x}^TA\mathbf{x} = \mathbf{x}^T(\lambda\mathbf{x}) = \lambda(\mathbf{x}^T\mathbf{x}) = \lambda$ since $\mathbf{x}^T\mathbf{x} = 1$. So λ must satisfy m ≤ λ ≤ M.
13. If m = M, then let t = (1 – 0)m + 0M = m and $\mathbf{x} = \mathbf{u}_n$. Theorem 6 shows that $\mathbf{u}_n^TA\mathbf{u}_n = m$. Now suppose that m < M, and let t be between m and M. Then 0 ≤ t – m ≤ M – m and 0 ≤ (t – m)/(M – m) ≤ 1. Let α = (t – m)/(M – m), and let $\mathbf{x} = \sqrt{1-\alpha}\,\mathbf{u}_n + \sqrt{\alpha}\,\mathbf{u}_1$. The vectors $\sqrt{1-\alpha}\,\mathbf{u}_n$ and $\sqrt{\alpha}\,\mathbf{u}_1$ are orthogonal because they are eigenvectors for different eigenvalues (or one of them is 0). By the Pythagorean Theorem
$\mathbf{x}^T\mathbf{x} = \|\mathbf{x}\|^2 = \|\sqrt{1-\alpha}\,\mathbf{u}_n\|^2 + \|\sqrt{\alpha}\,\mathbf{u}_1\|^2 = |1-\alpha|\,\|\mathbf{u}_n\|^2 + |\alpha|\,\|\mathbf{u}_1\|^2 = (1-\alpha) + \alpha = 1$
since $\mathbf{u}_n$ and $\mathbf{u}_1$ are unit vectors and 0 ≤ α ≤ 1. Also, since $\mathbf{u}_n$ and $\mathbf{u}_1$ are orthogonal,
$\mathbf{x}^TA\mathbf{x} = (\sqrt{1-\alpha}\,\mathbf{u}_n + \sqrt{\alpha}\,\mathbf{u}_1)^T A(\sqrt{1-\alpha}\,\mathbf{u}_n + \sqrt{\alpha}\,\mathbf{u}_1)$
$= (\sqrt{1-\alpha}\,\mathbf{u}_n + \sqrt{\alpha}\,\mathbf{u}_1)^T(\sqrt{1-\alpha}\,m\,\mathbf{u}_n + \sqrt{\alpha}\,M\,\mathbf{u}_1)$
$= |1-\alpha|\,m\,\mathbf{u}_n^T\mathbf{u}_n + |\alpha|\,M\,\mathbf{u}_1^T\mathbf{u}_1 = (1-\alpha)m + \alpha M = t$
Thus the quadratic form $\mathbf{x}^TA\mathbf{x}$ assumes every value between m and M for a suitable unit vector x.
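The interpolation construction in Exercise 13 can be tested directly. A minimal sketch in Python/NumPy, using the matrix of Exercise 5 and an arbitrary target value t (neither is part of the original proof):

```python
import numpy as np

# Exercise 13: x = sqrt(1-alpha) u_n + sqrt(alpha) u_1 is a unit vector
# with x^T A x = t for any t between the extreme eigenvalues m and M.
A = np.array([[5.0, 2.0], [2.0, 5.0]])
vals, vecs = np.linalg.eigh(A)
m, M = vals[0], vals[-1]               # m = 3, M = 7
un, u1 = vecs[:, 0], vecs[:, -1]       # orthonormal eigenvectors
t = 5.5                                # any target in [m, M]
alpha = (t - m) / (M - m)
x = np.sqrt(1 - alpha) * un + np.sqrt(alpha) * u1
print(float(x @ x), float(x @ A @ x))  # 1.0 and 5.5
```

Varying t over [3, 7] sweeps out every intermediate value of the form, exactly as the proof asserts.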
14. [M] The matrix of the quadratic form is $A = \begin{bmatrix} 0 & 1/2 & 3/2 & 15 \\ 1/2 & 0 & 15 & 3/2 \\ 3/2 & 15 & 0 & 1/2 \\ 15 & 3/2 & 1/2 & 0 \end{bmatrix}$. The eigenvalues of A are $\lambda_1 = 17$, $\lambda_2 = 13$, $\lambda_3 = -14$, and $\lambda_4 = -16$.

(a) By Theorem 6, the maximum value of $\mathbf{x}^TA\mathbf{x}$ subject to the constraint $\mathbf{x}^T\mathbf{x} = 1$ is the greatest eigenvalue $\lambda_1$ of A, which is 17.

(b) By Theorem 6, the maximum value of $\mathbf{x}^TA\mathbf{x}$ subject to the constraint $\mathbf{x}^T\mathbf{x} = 1$ occurs at a unit eigenvector u corresponding to the greatest eigenvalue $\lambda_1$ of A. One may compute that $\begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix}$ is an eigenvector corresponding to $\lambda_1 = 17$, so $\mathbf{u} = \begin{bmatrix} 1/2 \\ 1/2 \\ 1/2 \\ 1/2 \end{bmatrix}$.

(c) By Theorem 7, the maximum value of $\mathbf{x}^TA\mathbf{x}$ subject to the constraints $\mathbf{x}^T\mathbf{x} = 1$ and $\mathbf{x}^T\mathbf{u} = 0$ is the second greatest eigenvalue $\lambda_2$ of A, which is 13.
15. [M] The matrix of the quadratic form is $A = \begin{bmatrix} 0 & 3/2 & 5/2 & 7/2 \\ 3/2 & 0 & 7/2 & 5/2 \\ 5/2 & 7/2 & 0 & 3/2 \\ 7/2 & 5/2 & 3/2 & 0 \end{bmatrix}$. The eigenvalues of A are $\lambda_1 = 15/2$, $\lambda_2 = -1/2$, $\lambda_3 = -5/2$, and $\lambda_4 = -9/2$.

(a) By Theorem 6, the maximum value of $\mathbf{x}^TA\mathbf{x}$ subject to the constraint $\mathbf{x}^T\mathbf{x} = 1$ is the greatest eigenvalue $\lambda_1$ of A, which is 15/2.

(b) By Theorem 6, the maximum value of $\mathbf{x}^TA\mathbf{x}$ subject to the constraint $\mathbf{x}^T\mathbf{x} = 1$ occurs at a unit eigenvector u corresponding to the greatest eigenvalue $\lambda_1$ of A. One may compute that $\begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix}$ is an eigenvector corresponding to $\lambda_1 = 15/2$, so $\mathbf{u} = \begin{bmatrix} 1/2 \\ 1/2 \\ 1/2 \\ 1/2 \end{bmatrix}$.

(c) By Theorem 7, the maximum value of $\mathbf{x}^TA\mathbf{x}$ subject to the constraints $\mathbf{x}^T\mathbf{x} = 1$ and $\mathbf{x}^T\mathbf{u} = 0$ is the second greatest eigenvalue $\lambda_2$ of A, which is –1/2.
16. [M] The matrix of the quadratic form is $A = \begin{bmatrix} 4 & -3 & -5 & -5 \\ -3 & 0 & -3 & -3 \\ -5 & -3 & 0 & -1 \\ -5 & -3 & -1 & 0 \end{bmatrix}$. The eigenvalues of A are $\lambda_1 = 9$, $\lambda_2 = 3$, $\lambda_3 = 1$, and $\lambda_4 = -9$.

(a) By Theorem 6, the maximum value of $\mathbf{x}^TA\mathbf{x}$ subject to the constraint $\mathbf{x}^T\mathbf{x} = 1$ is the greatest eigenvalue $\lambda_1$ of A, which is 9.

(b) By Theorem 6, the maximum value of $\mathbf{x}^TA\mathbf{x}$ subject to the constraint $\mathbf{x}^T\mathbf{x} = 1$ occurs at a unit eigenvector u corresponding to the greatest eigenvalue $\lambda_1$ of A. One may compute that $\begin{bmatrix} 2 \\ 0 \\ -1 \\ -1 \end{bmatrix}$ is an eigenvector corresponding to $\lambda_1 = 9$, so $\mathbf{u} = \begin{bmatrix} 2/\sqrt{6} \\ 0 \\ -1/\sqrt{6} \\ -1/\sqrt{6} \end{bmatrix}$.

(c) By Theorem 7, the maximum value of $\mathbf{x}^TA\mathbf{x}$ subject to the constraints $\mathbf{x}^T\mathbf{x} = 1$ and $\mathbf{x}^T\mathbf{u} = 0$ is the second greatest eigenvalue $\lambda_2$ of A, which is 3.
17. [M] The matrix of the quadratic form is $A = \begin{bmatrix} -6 & -2 & -2 & -2 \\ -2 & -10 & 0 & 0 \\ -2 & 0 & -13 & 3 \\ -2 & 0 & 3 & -13 \end{bmatrix}$. The eigenvalues of A are $\lambda_1 = -4$, $\lambda_2 = -10$, $\lambda_3 = -12$, and $\lambda_4 = -16$.

(a) By Theorem 6, the maximum value of $\mathbf{x}^TA\mathbf{x}$ subject to the constraint $\mathbf{x}^T\mathbf{x} = 1$ is the greatest eigenvalue $\lambda_1$ of A, which is –4.

(b) By Theorem 6, the maximum value of $\mathbf{x}^TA\mathbf{x}$ subject to the constraint $\mathbf{x}^T\mathbf{x} = 1$ occurs at a unit eigenvector u corresponding to the greatest eigenvalue $\lambda_1$ of A. One may compute that $\begin{bmatrix} -3 \\ 1 \\ 1 \\ 1 \end{bmatrix}$ is an eigenvector corresponding to $\lambda_1 = -4$, so $\mathbf{u} = \begin{bmatrix} -3/\sqrt{12} \\ 1/\sqrt{12} \\ 1/\sqrt{12} \\ 1/\sqrt{12} \end{bmatrix}$.

(c) By Theorem 7, the maximum value of $\mathbf{x}^TA\mathbf{x}$ subject to the constraints $\mathbf{x}^T\mathbf{x} = 1$ and $\mathbf{x}^T\mathbf{u} = 0$ is the second greatest eigenvalue $\lambda_2$ of A, which is –10.
7.4 SOLUTIONS
Notes:
The section presents a modern topic of great importance in applications, particularly in computer
calculations. An understanding of the singular value decomposition is essential for advanced work in
science and engineering that requires matrix computations. Moreover, the singular value decomposition
explains much about the structure of matrix transformations. The SVD does for an arbitrary matrix almost
what an orthogonal decomposition does for a symmetric matrix.
1. Let $A = \begin{bmatrix} 1 & 0 \\ 0 & -3 \end{bmatrix}$. Then $A^TA = \begin{bmatrix} 1 & 0 \\ 0 & 9 \end{bmatrix}$, and the eigenvalues of $A^TA$ are seen to be (in decreasing order) $\lambda_1 = 9$ and $\lambda_2 = 1$. Thus the singular values of A are $\sigma_1 = \sqrt{9} = 3$ and $\sigma_2 = \sqrt{1} = 1$.
2. Let $A = \begin{bmatrix} -5 & 0 \\ 0 & 0 \end{bmatrix}$. Then $A^TA = \begin{bmatrix} 25 & 0 \\ 0 & 0 \end{bmatrix}$, and the eigenvalues of $A^TA$ are seen to be (in decreasing order) $\lambda_1 = 25$ and $\lambda_2 = 0$. Thus the singular values of A are $\sigma_1 = \sqrt{25} = 5$ and $\sigma_2 = \sqrt{0} = 0$.
3. Let $A = \begin{bmatrix} \sqrt{6} & 1 \\ 0 & \sqrt{6} \end{bmatrix}$. Then $A^TA = \begin{bmatrix} 6 & \sqrt{6} \\ \sqrt{6} & 7 \end{bmatrix}$, and the characteristic polynomial of $A^TA$ is $\lambda^2 - 13\lambda + 36 = (\lambda - 9)(\lambda - 4)$, and the eigenvalues of $A^TA$ are (in decreasing order) $\lambda_1 = 9$ and $\lambda_2 = 4$. Thus the singular values of A are $\sigma_1 = \sqrt{9} = 3$ and $\sigma_2 = \sqrt{4} = 2$.
4. Let $A = \begin{bmatrix} \sqrt{3} & 2 \\ 0 & \sqrt{3} \end{bmatrix}$. Then $A^TA = \begin{bmatrix} 3 & 2\sqrt{3} \\ 2\sqrt{3} & 7 \end{bmatrix}$, and the characteristic polynomial of $A^TA$ is $\lambda^2 - 10\lambda + 9 = (\lambda - 9)(\lambda - 1)$, and the eigenvalues of $A^TA$ are (in decreasing order) $\lambda_1 = 9$ and $\lambda_2 = 1$. Thus the singular values of A are $\sigma_1 = \sqrt{9} = 3$ and $\sigma_2 = \sqrt{1} = 1$.
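The pattern in Exercises 1–4, singular values as square roots of the eigenvalues of $A^TA$, is easy to cross-check against a library SVD. A minimal sketch in Python/NumPy for the matrix of Exercise 4 (not part of the original manual):

```python
import numpy as np

# Exercises 1-4: singular values are the square roots of the
# eigenvalues of A^T A, listed in decreasing order.
A = np.array([[np.sqrt(3), 2.0],
              [0.0, np.sqrt(3)]])
evals = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]   # [9, 1]
sigmas = np.sqrt(evals)                              # [3, 1]
print(sigmas)
print(np.linalg.svd(A, compute_uv=False))            # same values
```

`np.linalg.svd` returns the singular values directly, so the two computations should agree to rounding error.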
5. Let $A = \begin{bmatrix} -3 & 0 \\ 0 & 0 \end{bmatrix}$. Then $A^TA = \begin{bmatrix} 9 & 0 \\ 0 & 0 \end{bmatrix}$, and the eigenvalues of $A^TA$ are seen to be (in decreasing order) $\lambda_1 = 9$ and $\lambda_2 = 0$. Associated unit eigenvectors may be computed:
λ = 9: $\begin{bmatrix} 1 \\ 0 \end{bmatrix}$, λ = 0: $\begin{bmatrix} 0 \\ 1 \end{bmatrix}$
Thus one choice for V is $V = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$. The singular values of A are $\sigma_1 = \sqrt{9} = 3$ and $\sigma_2 = \sqrt{0} = 0$. Thus the matrix Σ is $\Sigma = \begin{bmatrix} 3 & 0 \\ 0 & 0 \end{bmatrix}$. Next compute
$\mathbf{u}_1 = \frac{1}{\sigma_1}A\mathbf{v}_1 = \begin{bmatrix} -1 \\ 0 \end{bmatrix}$
Because $A\mathbf{v}_2 = \mathbf{0}$, the only column found for U so far is $\mathbf{u}_1$. The other column of U is found by extending $\{\mathbf{u}_1\}$ to an orthonormal basis for $\mathbb{R}^2$. An easy choice is $\mathbf{u}_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$. Let $U = \begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix}$. Thus
$A = U\Sigma V^T = \begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix}\begin{bmatrix} 3 & 0 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$
6. Let $A = \begin{bmatrix} -2 & 0 \\ 0 & -1 \end{bmatrix}$. Then $A^TA = \begin{bmatrix} 4 & 0 \\ 0 & 1 \end{bmatrix}$, and the eigenvalues of $A^TA$ are seen to be (in decreasing order) $\lambda_1 = 4$ and $\lambda_2 = 1$. Associated unit eigenvectors may be computed:
λ = 4: $\begin{bmatrix} 1 \\ 0 \end{bmatrix}$, λ = 1: $\begin{bmatrix} 0 \\ 1 \end{bmatrix}$
Thus one choice for V is $V = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$. The singular values of A are $\sigma_1 = \sqrt{4} = 2$ and $\sigma_2 = \sqrt{1} = 1$. Thus the matrix Σ is $\Sigma = \begin{bmatrix} 2 & 0 \\ 0 & 1 \end{bmatrix}$. Next compute
$\mathbf{u}_1 = \frac{1}{\sigma_1}A\mathbf{v}_1 = \begin{bmatrix} -1 \\ 0 \end{bmatrix}$, $\mathbf{u}_2 = \frac{1}{\sigma_2}A\mathbf{v}_2 = \begin{bmatrix} 0 \\ -1 \end{bmatrix}$
Since $\{\mathbf{u}_1, \mathbf{u}_2\}$ is a basis for $\mathbb{R}^2$, let $U = \begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix}$. Thus
$A = U\Sigma V^T = \begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix}\begin{bmatrix} 2 & 0 \\ 0 & 1 \end{bmatrix}\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$
7. Let $A = \begin{bmatrix} 2 & -1 \\ 2 & 2 \end{bmatrix}$. Then $A^TA = \begin{bmatrix} 8 & 2 \\ 2 & 5 \end{bmatrix}$, and the characteristic polynomial of $A^TA$ is $\lambda^2 - 13\lambda + 36 = (\lambda - 9)(\lambda - 4)$, and the eigenvalues of $A^TA$ are (in decreasing order) $\lambda_1 = 9$ and $\lambda_2 = 4$. Associated unit eigenvectors may be computed:
λ = 9: $\begin{bmatrix} 2/\sqrt{5} \\ 1/\sqrt{5} \end{bmatrix}$, λ = 4: $\begin{bmatrix} -1/\sqrt{5} \\ 2/\sqrt{5} \end{bmatrix}$
Thus one choice for V is $V = \begin{bmatrix} 2/\sqrt{5} & -1/\sqrt{5} \\ 1/\sqrt{5} & 2/\sqrt{5} \end{bmatrix}$. The singular values of A are $\sigma_1 = \sqrt{9} = 3$ and $\sigma_2 = \sqrt{4} = 2$. Thus the matrix Σ is $\Sigma = \begin{bmatrix} 3 & 0 \\ 0 & 2 \end{bmatrix}$. Next compute
$\mathbf{u}_1 = \frac{1}{\sigma_1}A\mathbf{v}_1 = \begin{bmatrix} 1/\sqrt{5} \\ 2/\sqrt{5} \end{bmatrix}$, $\mathbf{u}_2 = \frac{1}{\sigma_2}A\mathbf{v}_2 = \begin{bmatrix} -2/\sqrt{5} \\ 1/\sqrt{5} \end{bmatrix}$
Since $\{\mathbf{u}_1, \mathbf{u}_2\}$ is a basis for $\mathbb{R}^2$, let $U = \begin{bmatrix} 1/\sqrt{5} & -2/\sqrt{5} \\ 2/\sqrt{5} & 1/\sqrt{5} \end{bmatrix}$. Thus
$A = U\Sigma V^T = \begin{bmatrix} 1/\sqrt{5} & -2/\sqrt{5} \\ 2/\sqrt{5} & 1/\sqrt{5} \end{bmatrix}\begin{bmatrix} 3 & 0 \\ 0 & 2 \end{bmatrix}\begin{bmatrix} 2/\sqrt{5} & 1/\sqrt{5} \\ -1/\sqrt{5} & 2/\sqrt{5} \end{bmatrix}$
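The hand-assembled SVD in Exercise 7 can be checked by rebuilding the factors and multiplying them out. A minimal sketch in Python/NumPy (not part of the original manual):

```python
import numpy as np

# Exercise 7: assemble U, Sigma, V by hand and check A = U Sigma V^T.
# Each u_i is A v_i / sigma_i, as in the construction in the text.
A = np.array([[2.0, -1.0],
              [2.0,  2.0]])
r5 = np.sqrt(5)
V = np.array([[2/r5, -1/r5],
              [1/r5,  2/r5]])
S = np.diag([3.0, 2.0])
U = np.column_stack([A @ V[:, 0] / 3, A @ V[:, 1] / 2])
print(np.allclose(U @ S @ V.T, A))   # True
```

The same three-step recipe (eigenvectors of $A^TA$ give V, square roots of its eigenvalues give Σ, then $\mathbf{u}_i = A\mathbf{v}_i/\sigma_i$) applies to Exercises 8–13 as well.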
8. Let $A = \begin{bmatrix} 2 & 3 \\ 0 & 2 \end{bmatrix}$. Then $A^TA = \begin{bmatrix} 4 & 6 \\ 6 & 13 \end{bmatrix}$, and the characteristic polynomial of $A^TA$ is $\lambda^2 - 17\lambda + 16 = (\lambda - 16)(\lambda - 1)$, and the eigenvalues of $A^TA$ are (in decreasing order) $\lambda_1 = 16$ and $\lambda_2 = 1$. Associated unit eigenvectors may be computed:
λ = 16: $\begin{bmatrix} 1/\sqrt{5} \\ 2/\sqrt{5} \end{bmatrix}$, λ = 1: $\begin{bmatrix} -2/\sqrt{5} \\ 1/\sqrt{5} \end{bmatrix}$
Thus one choice for V is $V = \begin{bmatrix} 1/\sqrt{5} & -2/\sqrt{5} \\ 2/\sqrt{5} & 1/\sqrt{5} \end{bmatrix}$. The singular values of A are $\sigma_1 = \sqrt{16} = 4$ and $\sigma_2 = \sqrt{1} = 1$. Thus the matrix Σ is $\Sigma = \begin{bmatrix} 4 & 0 \\ 0 & 1 \end{bmatrix}$. Next compute
$\mathbf{u}_1 = \frac{1}{\sigma_1}A\mathbf{v}_1 = \begin{bmatrix} 2/\sqrt{5} \\ 1/\sqrt{5} \end{bmatrix}$, $\mathbf{u}_2 = \frac{1}{\sigma_2}A\mathbf{v}_2 = \begin{bmatrix} -1/\sqrt{5} \\ 2/\sqrt{5} \end{bmatrix}$
Since $\{\mathbf{u}_1, \mathbf{u}_2\}$ is a basis for $\mathbb{R}^2$, let $U = \begin{bmatrix} 2/\sqrt{5} & -1/\sqrt{5} \\ 1/\sqrt{5} & 2/\sqrt{5} \end{bmatrix}$. Thus
$A = U\Sigma V^T = \begin{bmatrix} 2/\sqrt{5} & -1/\sqrt{5} \\ 1/\sqrt{5} & 2/\sqrt{5} \end{bmatrix}\begin{bmatrix} 4 & 0 \\ 0 & 1 \end{bmatrix}\begin{bmatrix} 1/\sqrt{5} & 2/\sqrt{5} \\ -2/\sqrt{5} & 1/\sqrt{5} \end{bmatrix}$
9. Let $A = \begin{bmatrix} 7 & 1 \\ 0 & 0 \\ 5 & 5 \end{bmatrix}$. Then $A^TA = \begin{bmatrix} 74 & 32 \\ 32 & 26 \end{bmatrix}$, and the characteristic polynomial of $A^TA$ is $\lambda^2 - 100\lambda + 900 = (\lambda - 90)(\lambda - 10)$, and the eigenvalues of $A^TA$ are (in decreasing order) $\lambda_1 = 90$ and $\lambda_2 = 10$. Associated unit eigenvectors may be computed:
λ = 90: $\begin{bmatrix} 2/\sqrt{5} \\ 1/\sqrt{5} \end{bmatrix}$, λ = 10: $\begin{bmatrix} -1/\sqrt{5} \\ 2/\sqrt{5} \end{bmatrix}$
Thus one choice for V is $V = \begin{bmatrix} 2/\sqrt{5} & -1/\sqrt{5} \\ 1/\sqrt{5} & 2/\sqrt{5} \end{bmatrix}$. The singular values of A are $\sigma_1 = \sqrt{90} = 3\sqrt{10}$ and $\sigma_2 = \sqrt{10}$. Thus the matrix Σ is $\Sigma = \begin{bmatrix} 3\sqrt{10} & 0 \\ 0 & \sqrt{10} \\ 0 & 0 \end{bmatrix}$. Next compute
$\mathbf{u}_1 = \frac{1}{\sigma_1}A\mathbf{v}_1 = \begin{bmatrix} 1/\sqrt{2} \\ 0 \\ 1/\sqrt{2} \end{bmatrix}$, $\mathbf{u}_2 = \frac{1}{\sigma_2}A\mathbf{v}_2 = \begin{bmatrix} -1/\sqrt{2} \\ 0 \\ 1/\sqrt{2} \end{bmatrix}$
Since $\{\mathbf{u}_1, \mathbf{u}_2\}$ is not a basis for $\mathbb{R}^3$, we need a unit vector $\mathbf{u}_3$ that is orthogonal to both $\mathbf{u}_1$ and $\mathbf{u}_2$. The vector $\mathbf{u}_3$ must satisfy the set of equations $\mathbf{u}_1^T\mathbf{x} = 0$ and $\mathbf{u}_2^T\mathbf{x} = 0$. These are equivalent to the linear equations
$x_1 + x_3 = 0$, $-x_1 + x_3 = 0$, so $\mathbf{x} = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}$, and $\mathbf{u}_3 = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}$
Therefore let $U = \begin{bmatrix} 1/\sqrt{2} & -1/\sqrt{2} & 0 \\ 0 & 0 & 1 \\ 1/\sqrt{2} & 1/\sqrt{2} & 0 \end{bmatrix}$. Thus
$A = U\Sigma V^T = \begin{bmatrix} 1/\sqrt{2} & -1/\sqrt{2} & 0 \\ 0 & 0 & 1 \\ 1/\sqrt{2} & 1/\sqrt{2} & 0 \end{bmatrix}\begin{bmatrix} 3\sqrt{10} & 0 \\ 0 & \sqrt{10} \\ 0 & 0 \end{bmatrix}\begin{bmatrix} 2/\sqrt{5} & 1/\sqrt{5} \\ -1/\sqrt{5} & 2/\sqrt{5} \end{bmatrix}$
10. Let $A = \begin{bmatrix} 4 & -2 \\ 2 & -1 \\ 0 & 0 \end{bmatrix}$. Then $A^TA = \begin{bmatrix} 20 & -10 \\ -10 & 5 \end{bmatrix}$, and the characteristic polynomial of $A^TA$ is $\lambda^2 - 25\lambda = \lambda(\lambda - 25)$, and the eigenvalues of $A^TA$ are (in decreasing order) $\lambda_1 = 25$ and $\lambda_2 = 0$. Associated unit eigenvectors may be computed:
λ = 25: $\begin{bmatrix} 2/\sqrt{5} \\ -1/\sqrt{5} \end{bmatrix}$, λ = 0: $\begin{bmatrix} 1/\sqrt{5} \\ 2/\sqrt{5} \end{bmatrix}$
Thus one choice for V is $V = \begin{bmatrix} 2/\sqrt{5} & 1/\sqrt{5} \\ -1/\sqrt{5} & 2/\sqrt{5} \end{bmatrix}$. The singular values of A are $\sigma_1 = \sqrt{25} = 5$ and $\sigma_2 = \sqrt{0} = 0$. Thus the matrix Σ is $\Sigma = \begin{bmatrix} 5 & 0 \\ 0 & 0 \\ 0 & 0 \end{bmatrix}$. Next compute
$\mathbf{u}_1 = \frac{1}{\sigma_1}A\mathbf{v}_1 = \begin{bmatrix} 2/\sqrt{5} \\ 1/\sqrt{5} \\ 0 \end{bmatrix}$
Because $A\mathbf{v}_2 = \mathbf{0}$, the only column found for U so far is $\mathbf{u}_1$. The other columns of U are found by extending $\{\mathbf{u}_1\}$ to an orthonormal basis for $\mathbb{R}^3$. In this case, we need two orthogonal unit vectors $\mathbf{u}_2$ and $\mathbf{u}_3$ that are orthogonal to $\mathbf{u}_1$. Each vector must satisfy the equation $\mathbf{u}_1^T\mathbf{x} = 0$, which is equivalent to the equation $2x_1 + x_2 = 0$. An orthonormal basis for the solution set of this equation is
$\mathbf{u}_2 = \begin{bmatrix} -1/\sqrt{5} \\ 2/\sqrt{5} \\ 0 \end{bmatrix}$, $\mathbf{u}_3 = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}$
Therefore, let $U = \begin{bmatrix} 2/\sqrt{5} & -1/\sqrt{5} & 0 \\ 1/\sqrt{5} & 2/\sqrt{5} & 0 \\ 0 & 0 & 1 \end{bmatrix}$. Thus
$A = U\Sigma V^T = \begin{bmatrix} 2/\sqrt{5} & -1/\sqrt{5} & 0 \\ 1/\sqrt{5} & 2/\sqrt{5} & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} 5 & 0 \\ 0 & 0 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} 2/\sqrt{5} & -1/\sqrt{5} \\ 1/\sqrt{5} & 2/\sqrt{5} \end{bmatrix}$
11. Let $A = \begin{bmatrix} -3 & 1 \\ 6 & -2 \\ 6 & -2 \end{bmatrix}$. Then $A^TA = \begin{bmatrix} 81 & -27 \\ -27 & 9 \end{bmatrix}$, and the characteristic polynomial of $A^TA$ is $\lambda^2 - 90\lambda = \lambda(\lambda - 90)$, and the eigenvalues of $A^TA$ are (in decreasing order) $\lambda_1 = 90$ and $\lambda_2 = 0$. Associated unit eigenvectors may be computed:
λ = 90: $\begin{bmatrix} 3/\sqrt{10} \\ -1/\sqrt{10} \end{bmatrix}$, λ = 0: $\begin{bmatrix} 1/\sqrt{10} \\ 3/\sqrt{10} \end{bmatrix}$
Thus one choice for V is $V = \begin{bmatrix} 3/\sqrt{10} & 1/\sqrt{10} \\ -1/\sqrt{10} & 3/\sqrt{10} \end{bmatrix}$. The singular values of A are $\sigma_1 = \sqrt{90} = 3\sqrt{10}$ and $\sigma_2 = \sqrt{0} = 0$. Thus the matrix Σ is $\Sigma = \begin{bmatrix} 3\sqrt{10} & 0 \\ 0 & 0 \\ 0 & 0 \end{bmatrix}$. Next compute
$\mathbf{u}_1 = \frac{1}{\sigma_1}A\mathbf{v}_1 = \begin{bmatrix} -1/3 \\ 2/3 \\ 2/3 \end{bmatrix}$
Because $A\mathbf{v}_2 = \mathbf{0}$, the only column found for U so far is $\mathbf{u}_1$. The other columns of U can be found by extending $\{\mathbf{u}_1\}$ to an orthonormal basis for $\mathbb{R}^3$. In this case, we need two orthogonal unit vectors $\mathbf{u}_2$ and $\mathbf{u}_3$ that are orthogonal to $\mathbf{u}_1$. Each vector must satisfy the equation $\mathbf{u}_1^T\mathbf{x} = 0$, which is equivalent to the equation $-x_1 + 2x_2 + 2x_3 = 0$. An orthonormal basis for the solution set of this equation is
$\mathbf{u}_2 = \begin{bmatrix} 2/3 \\ -1/3 \\ 2/3 \end{bmatrix}$, $\mathbf{u}_3 = \begin{bmatrix} 2/3 \\ 2/3 \\ -1/3 \end{bmatrix}$
Therefore, let $U = \begin{bmatrix} -1/3 & 2/3 & 2/3 \\ 2/3 & -1/3 & 2/3 \\ 2/3 & 2/3 & -1/3 \end{bmatrix}$. Thus
$A = U\Sigma V^T = \begin{bmatrix} -1/3 & 2/3 & 2/3 \\ 2/3 & -1/3 & 2/3 \\ 2/3 & 2/3 & -1/3 \end{bmatrix}\begin{bmatrix} 3\sqrt{10} & 0 \\ 0 & 0 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} 3/\sqrt{10} & -1/\sqrt{10} \\ 1/\sqrt{10} & 3/\sqrt{10} \end{bmatrix}$
12. Let $A = \begin{bmatrix} 1 & 1 \\ 0 & 1 \\ -1 & 1 \end{bmatrix}$. Then $A^TA = \begin{bmatrix} 2 & 0 \\ 0 & 3 \end{bmatrix}$, and the eigenvalues of $A^TA$ are seen to be (in decreasing order) $\lambda_1 = 3$ and $\lambda_2 = 2$. Associated unit eigenvectors may be computed:
λ = 3: $\begin{bmatrix} 0 \\ 1 \end{bmatrix}$, λ = 2: $\begin{bmatrix} 1 \\ 0 \end{bmatrix}$
Thus one choice for V is $V = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$. The singular values of A are $\sigma_1 = \sqrt{3}$ and $\sigma_2 = \sqrt{2}$. Thus the matrix Σ is $\Sigma = \begin{bmatrix} \sqrt{3} & 0 \\ 0 & \sqrt{2} \\ 0 & 0 \end{bmatrix}$. Next compute
$\mathbf{u}_1 = \frac{1}{\sigma_1}A\mathbf{v}_1 = \begin{bmatrix} 1/\sqrt{3} \\ 1/\sqrt{3} \\ 1/\sqrt{3} \end{bmatrix}$, $\mathbf{u}_2 = \frac{1}{\sigma_2}A\mathbf{v}_2 = \begin{bmatrix} 1/\sqrt{2} \\ 0 \\ -1/\sqrt{2} \end{bmatrix}$
Since $\{\mathbf{u}_1, \mathbf{u}_2\}$ is not a basis for $\mathbb{R}^3$, we need a unit vector $\mathbf{u}_3$ that is orthogonal to both $\mathbf{u}_1$ and $\mathbf{u}_2$. The vector $\mathbf{u}_3$ must satisfy the set of equations $\mathbf{u}_1^T\mathbf{x} = 0$ and $\mathbf{u}_2^T\mathbf{x} = 0$. These are equivalent to the linear equations
$x_1 + x_2 + x_3 = 0$, $x_1 - x_3 = 0$, so $\mathbf{x} = \begin{bmatrix} 1 \\ -2 \\ 1 \end{bmatrix}$, and $\mathbf{u}_3 = \begin{bmatrix} 1/\sqrt{6} \\ -2/\sqrt{6} \\ 1/\sqrt{6} \end{bmatrix}$
Therefore let $U = \begin{bmatrix} 1/\sqrt{3} & 1/\sqrt{2} & 1/\sqrt{6} \\ 1/\sqrt{3} & 0 & -2/\sqrt{6} \\ 1/\sqrt{3} & -1/\sqrt{2} & 1/\sqrt{6} \end{bmatrix}$. Thus
$A = U\Sigma V^T = \begin{bmatrix} 1/\sqrt{3} & 1/\sqrt{2} & 1/\sqrt{6} \\ 1/\sqrt{3} & 0 & -2/\sqrt{6} \\ 1/\sqrt{3} & -1/\sqrt{2} & 1/\sqrt{6} \end{bmatrix}\begin{bmatrix} \sqrt{3} & 0 \\ 0 & \sqrt{2} \\ 0 & 0 \end{bmatrix}\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$
13. Let $A = \begin{bmatrix} 3 & 2 & 2 \\ 2 & 3 & -2 \end{bmatrix}$. Then $A^T = \begin{bmatrix} 3 & 2 \\ 2 & 3 \\ 2 & -2 \end{bmatrix}$, $(A^T)^TA^T = AA^T = \begin{bmatrix} 17 & 8 \\ 8 & 17 \end{bmatrix}$, and the eigenvalues of $(A^T)^TA^T$ are seen to be (in decreasing order) $\lambda_1 = 25$ and $\lambda_2 = 9$. Associated unit eigenvectors may be computed:
λ = 25: $\begin{bmatrix} 1/\sqrt{2} \\ 1/\sqrt{2} \end{bmatrix}$, λ = 9: $\begin{bmatrix} -1/\sqrt{2} \\ 1/\sqrt{2} \end{bmatrix}$
Thus one choice for V is $V = \begin{bmatrix} 1/\sqrt{2} & -1/\sqrt{2} \\ 1/\sqrt{2} & 1/\sqrt{2} \end{bmatrix}$. The singular values of $A^T$ are $\sigma_1 = \sqrt{25} = 5$ and $\sigma_2 = \sqrt{9} = 3$. Thus the matrix Σ is $\Sigma = \begin{bmatrix} 5 & 0 \\ 0 & 3 \\ 0 & 0 \end{bmatrix}$. Next compute
$\mathbf{u}_1 = \frac{1}{\sigma_1}A^T\mathbf{v}_1 = \begin{bmatrix} 1/\sqrt{2} \\ 1/\sqrt{2} \\ 0 \end{bmatrix}$, $\mathbf{u}_2 = \frac{1}{\sigma_2}A^T\mathbf{v}_2 = \begin{bmatrix} -1/\sqrt{18} \\ 1/\sqrt{18} \\ -4/\sqrt{18} \end{bmatrix}$
Since $\{\mathbf{u}_1, \mathbf{u}_2\}$ is not a basis for $\mathbb{R}^3$, we need a unit vector $\mathbf{u}_3$ that is orthogonal to both $\mathbf{u}_1$ and $\mathbf{u}_2$. The vector $\mathbf{u}_3$ must satisfy the set of equations $\mathbf{u}_1^T\mathbf{x} = 0$ and $\mathbf{u}_2^T\mathbf{x} = 0$. These are equivalent to the linear equations
$x_1 + x_2 = 0$, $-x_1 + x_2 - 4x_3 = 0$, so $\mathbf{x} = \begin{bmatrix} 2 \\ -2 \\ -1 \end{bmatrix}$, and $\mathbf{u}_3 = \begin{bmatrix} 2/3 \\ -2/3 \\ -1/3 \end{bmatrix}$
Therefore let $U = \begin{bmatrix} 1/\sqrt{2} & -1/\sqrt{18} & 2/3 \\ 1/\sqrt{2} & 1/\sqrt{18} & -2/3 \\ 0 & -4/\sqrt{18} & -1/3 \end{bmatrix}$. Thus
$A^T = U\Sigma V^T = \begin{bmatrix} 1/\sqrt{2} & -1/\sqrt{18} & 2/3 \\ 1/\sqrt{2} & 1/\sqrt{18} & -2/3 \\ 0 & -4/\sqrt{18} & -1/3 \end{bmatrix}\begin{bmatrix} 5 & 0 \\ 0 & 3 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} 1/\sqrt{2} & 1/\sqrt{2} \\ -1/\sqrt{2} & 1/\sqrt{2} \end{bmatrix}$
An SVD for A is computed by taking transposes:
$A = \begin{bmatrix} 1/\sqrt{2} & -1/\sqrt{2} \\ 1/\sqrt{2} & 1/\sqrt{2} \end{bmatrix}\begin{bmatrix} 5 & 0 & 0 \\ 0 & 3 & 0 \end{bmatrix}\begin{bmatrix} 1/\sqrt{2} & 1/\sqrt{2} & 0 \\ -1/\sqrt{18} & 1/\sqrt{18} & -4/\sqrt{18} \\ 2/3 & -2/3 & -1/3 \end{bmatrix}$
14. From Exercise 7, $A = U\Sigma V^T$ with $V = \begin{bmatrix} 2/\sqrt{5} & -1/\sqrt{5} \\ 1/\sqrt{5} & 2/\sqrt{5} \end{bmatrix}$. Since the first column of V is a unit eigenvector associated with the greatest eigenvalue $\lambda_1$ of $A^TA$, the first column of V is a unit vector at which $\|A\mathbf{x}\|$ is maximized.
15. a. Since A has 2 nonzero singular values, rank A = 2.
b. By Example 6, {u1, u2} = {[.40; .37; −.84], [.78; .33; −.52]} is a basis for Col A and {v3} = {[.58; .58; .58]} is a basis for Nul A.
16. a. Since A has 2 nonzero singular values, rank A = 2.
b. By Example 6, {u1, u2} = {[−.86; .31; .41], [−.11; .68; .73]} is a basis for Col A and {v3, v4} = {[.65; .08; −.16; −.73], [.34; .42; −.84; −.08]} is a basis for Nul A.
17. Let A = UΣV^T = UΣV^{−1}. Since A is square and invertible, rank A = n, and all of the entries on the diagonal of Σ must be nonzero. So A^{−1} = (UΣV^T)^{−1} = VΣ^{−1}U^{−1} = VΣ^{−1}U^T.
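Exercise 17's formula A^{−1} = VΣ^{−1}U^T is easy to verify numerically. The matrix below is an arbitrary invertible example chosen for this note, not one from the text:

```python
import numpy as np

# An arbitrary invertible matrix (illustration only).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

U, s, Vt = np.linalg.svd(A)

# Rebuild the inverse from the SVD factors: A^{-1} = V Sigma^{-1} U^T.
A_inv_from_svd = Vt.T @ np.diag(1.0 / s) @ U.T
```

The result matches `np.linalg.inv(A)` to machine precision.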
18. First note that the determinant of an orthogonal matrix is ±1, because 1 = det I = det(U^T U) = (det U^T)(det U) = (det U)². Suppose that A is square and A = UΣV^T. Then Σ is square, and
det A = (det U)(det Σ)(det V^T) = ±det Σ = ±σ1⋯σn.
19. Since U and V are orthogonal matrices,
A^T A = (UΣV^T)^T UΣV^T = VΣ^T U^T UΣV^T = V(Σ^TΣ)V^{−1}
If σ1, …, σr are the diagonal entries in Σ, then Σ^TΣ is a diagonal matrix with diagonal entries σ1², …, σr² and possibly some zeros. Thus V diagonalizes A^T A and the columns of V are eigenvectors of A^T A by the Diagonalization Theorem in Section 5.3. Likewise
AA^T = UΣV^T(UΣV^T)^T = UΣV^T VΣ^T U^T = U(ΣΣ^T)U^{−1}
so U diagonalizes AA^T and the columns of U must be eigenvectors of AA^T. Moreover, the Diagonalization Theorem states that σ1², …, σr² are the nonzero eigenvalues of A^T A. Hence σ1, …, σr are the nonzero singular values of A.
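The relationship proved in Exercise 19, that the nonzero eigenvalues of A^T A are the squares of the singular values, can be illustrated with the matrix from Exercise 12 (a NumPy sketch, not part of the original solution):

```python
import numpy as np

# The matrix from Exercise 12.
A = np.array([[1.0, 1.0],
              [0.0, 1.0],
              [-1.0, 1.0]])

s = np.linalg.svd(A, compute_uv=False)             # singular values, decreasing
eigs = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]  # eigenvalues of A^T A, decreasing
```

Here A^T A = [2 0; 0 3], so the eigenvalues are 3 and 2 and the singular values are √3 and √2.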
20. If A is positive definite, then A = PDP^T, where P is an orthogonal matrix and D is a diagonal matrix. The diagonal entries of D are positive because they are the eigenvalues of a positive definite matrix. Since P is an orthogonal matrix, PP^T = I and the square matrix P^T is invertible. Moreover, (P^T)^{−1} = (P^{−1})^T = (P^T)^T, so P^T is an orthogonal matrix. Thus the factorization A = PDP^T has the properties that make it a singular value decomposition.
21. Let A = UΣV^T. The matrix PU is orthogonal, because P and U are both orthogonal. (See Exercise 29 in Section 6.2.) So the equation PA = (PU)ΣV^T has the form required for a singular value decomposition. By Exercise 19, the diagonal entries in Σ are the singular values of PA.
22. The right singular vector v1 is an eigenvector for the largest eigenvalue λ1 of A^T A. By Theorem 7 in Section 7.3, the second largest eigenvalue λ2 is the maximum of x^T(A^T A)x over all unit vectors orthogonal to v1. Since x^T(A^T A)x = || Ax ||², the square root of λ2, which is the second largest singular value of A, is the maximum of || Ax || over all unit vectors orthogonal to v1.
23. From the proof of Theorem 10, UΣ = [σ1u1 ⋯ σrur 0 ⋯ 0]. The column-row expansion of the product (UΣ)V^T shows that
A = (UΣ)V^T = σ1u1v1^T + ⋯ + σrurvr^T
where r is the rank of A.
24. From Exercise 23, A^T = σ1v1u1^T + ⋯ + σrvrur^T. Then since
ui^T uj = 0 for i ≠ j and ui^T uj = 1 for i = j,
A^T uj = (σ1v1u1^T + ⋯ + σrvrur^T)uj = σjvj(uj^T uj) = σjvj
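The rank-one expansion of Exercise 23 can be checked directly; the sketch below (NumPy, not part of the original solution) rebuilds the Exercise 13 matrix from its singular triples:

```python
import numpy as np

# The matrix from Exercise 13.
A = np.array([[3.0, 2.0, 2.0],
              [2.0, 3.0, -2.0]])

U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-12))  # rank of A

# Sum of the rank-one terms sigma_i * u_i * v_i^T.
A_sum = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(r))
```

The sum of the r rank-one matrices reproduces A exactly, as the column-row expansion predicts.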
25. Consider the SVD for the standard matrix A of T, say A = UΣV^T. Let B = {v1, …, vn} and C = {u1, …, um} be bases for ℝ^n and ℝ^m constructed respectively from the columns of V and U. Since the columns of V are orthonormal, V^T vj = ej, where ej is the jth column of the n × n identity matrix. To find the matrix of T relative to B and C, compute
T(vj) = Avj = UΣV^T vj = UΣej = U(σjej) = σjUej = σjuj
so [T(vj)]_C = σjej. Formula (4) in the discussion at the beginning of Section 5.4 shows that the “diagonal” matrix Σ is the matrix of T relative to B and C.
26. [M] Let A = [18 13 −4 −4; −2 19 −4 −12; 14 11 −12 −8; 2 21 4 −8]. Then
A^T A = [528 392 −224 −176; 392 1092 −176 −536; −224 −176 192 128; −176 −536 128 288]
and the eigenvalues of A^T A are found to be (in decreasing order) λ1 = 1600, λ2 = 400, λ3 = 100, and λ4 = 0. Associated unit eigenvectors may be computed:
λ1: [.4; .8; −.2; −.4], λ2: [.8; −.4; −.4; .2], λ3: [.4; .2; .8; .4], λ4: [.2; −.4; .4; −.8]
Thus one choice for V is
V = [.4 .8 .4 .2; .8 −.4 .2 −.4; −.2 −.4 .8 .4; −.4 .2 .4 −.8]
The singular values of A are σ1 = 40, σ2 = 20, σ3 = 10, and σ4 = 0. Thus the matrix Σ is
Σ = [40 0 0 0; 0 20 0 0; 0 0 10 0; 0 0 0 0]
Next compute
u1 = (1/σ1)Av1 = [.5; .5; .5; .5], u2 = (1/σ2)Av2 = [.5; −.5; .5; −.5], u3 = (1/σ3)Av3 = [.5; −.5; −.5; .5]
Because Av4 = 0, only three columns of U have been found so far. The last column of U can be found by extending {u1, u2, u3} to an orthonormal basis for ℝ^4. The vector u4 must satisfy the set of equations u1^T x = 0, u2^T x = 0, and u3^T x = 0. These are equivalent to the linear equations
x1 + x2 + x3 + x4 = 0, x1 − x2 + x3 − x4 = 0, x1 − x2 − x3 + x4 = 0,
so x = x4[−1; −1; 1; 1], and u4 = [−.5; −.5; .5; .5]. Therefore, let
U = [.5 .5 .5 −.5; .5 −.5 −.5 −.5; .5 .5 −.5 .5; .5 −.5 .5 .5]
Thus
A = UΣV^T = [.5 .5 .5 −.5; .5 −.5 −.5 −.5; .5 .5 −.5 .5; .5 −.5 .5 .5][40 0 0 0; 0 20 0 0; 0 0 10 0; 0 0 0 0][.4 .8 −.2 −.4; .8 −.4 −.4 .2; .4 .2 .8 .4; .2 −.4 .4 −.8]
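Since the printed signs of the Exercise 26 matrix were lost in extraction, the sign pattern used in this solution is an editorial reconstruction; with that caveat, the singular values 40, 20, 10, 0 can be confirmed (NumPy used in place of MATLAB):

```python
import numpy as np

# Exercise 26's matrix with the sign pattern as reconstructed in this
# solution (an editorial assumption, not guaranteed to match the book).
A = np.array([[18.0, 13.0, -4.0, -4.0],
              [-2.0, 19.0, -4.0, -12.0],
              [14.0, 11.0, -12.0, -8.0],
              [2.0, 21.0, 4.0, -8.0]])

s = np.linalg.svd(A, compute_uv=False)
print(s)  # approximately [40. 20. 10. 0.]
```

The zero singular value confirms that A has rank 3, so Av4 = 0 as used above.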
27. [M] Let A = [6 −8 −4 −5 4; 2 7 −5 6 −4; 0 1 8 2 2; 1 2 −4 4 −8]. Then
A^T A = [41 −32 −38 −14 8; −32 118 −3 92 −74; −38 −3 121 −10 52; −14 92 −10 81 −72; 8 −74 52 −72 100]
and the eigenvalues of A^T A are found to be (in decreasing order) λ1 = 270.87, λ2 = 147.85, λ3 = 23.73, λ4 = 18.55, and λ5 = 0. Associated unit eigenvectors may be computed:
λ1: [.10; −.61; .21; −.52; .55], λ2: [−.39; .29; .84; .14; .19], λ3: [.74; .27; .07; .38; .49], λ4: [.41; −.50; .45; .23; −.58], λ5: [.36; .48; .19; −.72; −.29]
Thus one choice for V is
V = [.10 −.39 .74 .41 .36; −.61 .29 .27 −.50 .48; .21 .84 .07 .45 .19; −.52 .14 .38 .23 −.72; .55 .19 .49 −.58 −.29]
The nonzero singular values of A are σ1 = 16.46, σ2 = 12.16, σ3 = 4.87, and σ4 = 4.31. Thus the matrix Σ is
Σ = [16.46 0 0 0 0; 0 12.16 0 0 0; 0 0 4.87 0 0; 0 0 0 4.31 0]
Next compute
u1 = (1/σ1)Av1 = [.57; −.63; .07; −.51], u2 = (1/σ2)Av2 = [−.65; −.24; .63; −.34],
u3 = (1/σ3)Av3 = [.42; .68; .53; −.29], u4 = (1/σ4)Av4 = [.27; −.29; .56; .73]
Since {u1, u2, u3, u4} is a basis for ℝ^4, let
U = [.57 −.65 .42 .27; −.63 −.24 .68 −.29; .07 .63 .53 .56; −.51 −.34 −.29 .73]
Thus
A = UΣV^T = [.57 −.65 .42 .27; −.63 −.24 .68 −.29; .07 .63 .53 .56; −.51 −.34 −.29 .73][16.46 0 0 0 0; 0 12.16 0 0 0; 0 0 4.87 0 0; 0 0 0 4.31 0][.10 −.61 .21 −.52 .55; −.39 .29 .84 .14 .19; .74 .27 .07 .38 .49; .41 −.50 .45 .23 −.58; .36 .48 .19 −.72 −.29]
28. [M] Let A = [4 0 −3 7; 6 9 −9 −9; 7 5 10 19; 1 2 −4 1]. Then
A^T A = [102 91 0 108; 91 110 −39 16; 0 −39 206 246; 108 16 246 492]
and the eigenvalues of A^T A are found to be (in decreasing order) λ1 = 649.9059, λ2 = 218.0033, λ3 = 39.6345, and λ4 = 2.4564. The singular values of A are thus σ1 = 25.4933, σ2 = 14.7649, σ3 = 6.2956, and σ4 = 1.5673. The condition number σ1/σ4 ≈ 16.266.
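The condition number in Exercise 28 is the ratio σ1/σ4, which is exactly what `np.linalg.cond` computes by default. The sketch below uses the Exercise 28 matrix with signs as reconstructed in this solution (an editorial assumption), NumPy standing in for MATLAB:

```python
import numpy as np

# Exercise 28's matrix (sign pattern is an editorial reconstruction).
A = np.array([[4.0, 0.0, -3.0, 7.0],
              [6.0, 9.0, -9.0, -9.0],
              [7.0, 5.0, 10.0, 19.0],
              [1.0, 2.0, -4.0, 1.0]])

s = np.linalg.svd(A, compute_uv=False)
cond = s[0] / s[-1]        # sigma_1 / sigma_4
builtin = np.linalg.cond(A)  # same 2-norm condition number
```

Both computations give approximately 16.266.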
29. [M] Let A = [5 3 1 7 9; 6 4 2 8 −8; 7 5 3 10 9; 9 6 4 −9 −5; 8 5 2 11 4]. Then
A^T A = [255 168 90 160 47; 168 111 60 104 30; 90 60 34 39 8; 160 104 39 415 178; 47 30 8 178 267]
and the eigenvalues of A^T A are found to be (in decreasing order) λ1 = 672.589, λ2 = 280.745, λ3 = 127.503, λ4 = 1.163, and λ5 = 1.428 × 10^−7. The singular values of A are thus σ1 = 25.9343, σ2 = 16.7554, σ3 = 11.2917, σ4 = 1.07853, and σ5 = .000377928. The condition number σ1/σ5 ≈ 68,622.
7.5 SOLUTIONS
Notes: The application presented here has turned out to be of interest to a wide variety of students, including engineers. I cover this in Course Syllabus 3 described in the front matter of the text, but I only have time to mention the idea briefly to my other classes.
1. The matrix of observations is X = [19 22 6 3 2 20; 12 6 9 15 13 5] and the sample mean is
M = (1/6)[72; 60] = [12; 10]
The mean-deviation form B is obtained by subtracting M from each column of X, so
B = [7 10 −6 −9 −10 8; 2 −4 −1 5 3 −5]
The sample covariance matrix is
S = (1/(6 − 1))BB^T = (1/5)[430 −135; −135 80] = [86 −27; −27 16]
2. The matrix of observations is X = [1 5 2 6 7 3; 3 11 6 8 15 11] and the sample mean is
M = (1/6)[24; 54] = [4; 9]
The mean-deviation form B is obtained by subtracting M from each column of X, so
B = [−3 1 −2 2 3 −1; −6 2 −3 −1 6 2]
The sample covariance matrix is
S = (1/(6 − 1))BB^T = (1/5)[28 40; 40 90] = [5.6 8; 8 18]
3. The principal components of the data are the unit eigenvectors of the sample covariance matrix S. One computes that (in descending order) the eigenvalues of S = [86 −27; −27 16] are λ1 = 95.2041 and λ2 = 6.79593. One further computes that corresponding eigenvectors are
v1 = [2.93348; −1] and v2 = [.340892; 1]
These vectors may be normalized to find the principal components, which are
u1 = [.946515; −.322659] for λ1 = 95.2041 and u2 = [.322659; .946515] for λ2 = 6.79593.
4. The principal components of the data are the unit eigenvectors of the sample covariance matrix S. One computes that (in descending order) the eigenvalues of S = [5.6 8; 8 18] are λ1 = 21.9213 and λ2 = 1.67874. One further computes that corresponding eigenvectors are
v1 = [.490158; 1] and v2 = [−2.04016; 1]
These vectors may be normalized to find the principal components, which are
u1 = [.44013; .897934] for λ1 = 21.9213 and u2 = [−.897934; .44013] for λ2 = 1.67874.
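The eigenpairs in Exercises 3 and 4 are easy to confirm; for the Exercise 3 covariance matrix, a NumPy sketch (note that `np.linalg.eigh` returns eigenvalues in ascending order, and eigenvector signs are only determined up to ±1):

```python
import numpy as np

S = np.array([[86.0, -27.0],
              [-27.0, 16.0]])

evals, evecs = np.linalg.eigh(S)  # ascending order
evals = evals[::-1]               # lambda1 = 95.2041, lambda2 = 6.79593
u1 = evecs[:, ::-1][:, 0]         # unit eigenvector for lambda1
```

Up to an overall sign, u1 agrees with [.946515; −.322659] above.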
5. [M] The largest eigenvalue of S = [164.12 32.73 81.04; 32.73 539.44 249.13; 81.04 249.13 189.11] is λ1 = 677.497, and the first principal component of the data is the unit eigenvector corresponding to λ1, which is u1 = [.129554; .874423; .467547]. The fraction of the total variance that is contained in this component is
λ1/tr(S) = 677.497/(164.12 + 539.44 + 189.11) = .758956
so 75.8956% of the variance of the data is contained in the first principal component.
6. [M] The largest eigenvalue of S = [29.64 18.38 5.00; 18.38 20.82 14.06; 5.00 14.06 29.21] is λ1 = 51.6957, and the first principal component of the data is the unit eigenvector corresponding to λ1, which is u1 = [.615525; .599424; .511683]. Thus one choice for the new variable is y1 = .615525x1 + .599424x2 + .511683x3. The fraction of the total variance that is contained in this component is
λ1/tr(S) = 51.6957/(29.64 + 20.82 + 29.21) = .648872
so 64.8872% of the variance of the data is explained by y1.
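The fraction-of-variance computation in Exercises 5 and 6 is just λ1/tr(S). For the Exercise 5 covariance matrix (entries as read from the original; a NumPy sketch standing in for MATLAB):

```python
import numpy as np

S = np.array([[164.12, 32.73, 81.04],
              [32.73, 539.44, 249.13],
              [81.04, 249.13, 189.11]])

lam1 = np.linalg.eigvalsh(S).max()  # approximately 677.497
fraction = lam1 / np.trace(S)       # approximately .758956
```

About 75.9% of the total variance is captured by the first principal component.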
7. Since the unit eigenvector corresponding to λ1 = 95.2041 is u1 = [.946515; −.322659], one choice for the new variable is y1 = .946515x1 − .322659x2. The fraction of the total variance that is contained in this component is
λ1/tr(S) = 95.2041/(86 + 16) = .933374
so 93.3374% of the variance of the data is explained by y1.
8. Since the unit eigenvector corresponding to λ1 = 21.9213 is u1 = [.44013; .897934], one choice for the new variable is y1 = .44013x1 + .897934x2. The fraction of the total variance that is contained in this component is
λ1/tr(S) = 21.9213/(5.6 + 18) = .928869
so 92.8869% of the variance of the data is explained by y1.
9. The largest eigenvalue of S = [5 2 0; 2 6 2; 0 2 7] is λ1 = 9, and the first principal component of the data is the unit eigenvector corresponding to λ1, which is u1 = [1/3; 2/3; 2/3]. Thus one choice for y is y = (1/3)x1 + (2/3)x2 + (2/3)x3, and the variance of y is λ1 = 9.
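The leading eigenpair in Exercise 9 can be confirmed quickly (NumPy sketch, not part of the original solution; the eigenvector is determined only up to sign):

```python
import numpy as np

S = np.array([[5.0, 2.0, 0.0],
              [2.0, 6.0, 2.0],
              [0.0, 2.0, 7.0]])

evals, evecs = np.linalg.eigh(S)  # ascending order
lam1 = evals[-1]                  # largest eigenvalue, 9
u1 = evecs[:, -1]                 # unit eigenvector, +/- [1/3, 2/3, 2/3]
```

The other two eigenvalues are 6 and 3, so λ1 = 9 is indeed the largest.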
10. [M] The largest eigenvalue of S = [5 4 2; 4 11 4; 2 4 5] is λ1 = 15, and the first principal component of the data is the unit eigenvector corresponding to λ1, which is u1 = [1/√6; 2/√6; 1/√6]. Thus one choice for y is y = (1/√6)x1 + (2/√6)x2 + (1/√6)x3, and the variance of y is λ1 = 15.
11. a. If w is the vector in ℝ^N with a 1 in each position, then [X1 ⋯ XN]w = X1 + ⋯ + XN = 0 since the Xk are in mean-deviation form. Then
[Y1 ⋯ YN]w = [P^T X1 ⋯ P^T XN]w = P^T[X1 ⋯ XN]w = P^T 0 = 0
Thus Y1 + ⋯ + YN = 0, and the Yk are in mean-deviation form.
b. By part a., the covariance matrix S_Y of Y1, …, YN is
S_Y = (1/(N − 1))[Y1 ⋯ YN][Y1 ⋯ YN]^T = (1/(N − 1))P^T[X1 ⋯ XN]([X1 ⋯ XN])^T P = P^T((1/(N − 1))[X1 ⋯ XN][X1 ⋯ XN]^T)P = P^T S P
since the Xk are in mean-deviation form.
12. By Exercise 11, the change of variables X = PY changes the covariance matrix S of X into the covariance matrix P^T S P of Y. The total variance of the data as described by Y is tr(P^T S P). However, since P^T S P is similar to S, they have the same trace (by Exercise 25 in Section 5.4). Thus the total variance of the data is unchanged by the change of variables X = PY.
13. Let M be the sample mean for the data, and let X̂k = Xk − M. Let B = [X̂1 ⋯ X̂N] be the matrix of observations in mean-deviation form. By the row-column expansion of BB^T, the sample covariance matrix is
S = (1/(N − 1))BB^T = (1/(N − 1))[X̂1 ⋯ X̂N][X̂1^T; ⋮; X̂N^T] = (1/(N − 1)) Σ_{k=1}^{N} X̂kX̂k^T = (1/(N − 1)) Σ_{k=1}^{N} (Xk − M)(Xk − M)^T
Chapter 7 SUPPLEMENTARY EXERCISES
1. a. True. This is just part of Theorem 2 in Section 7.1. The proof appears just before the statement
of the theorem.
b. False. A counterexample is A = [0 1; −1 0].
c. True. This is proved in the first part of the proof of Theorem 6 in Section 7.3. It is also a
consequence of Theorem 7 in Section 6.2.
d. False. The principal axes of x^T Ax are the columns of any orthogonal matrix P that diagonalizes A. Note: When A has an eigenvalue whose eigenspace has dimension greater than 1, the principal axes are not uniquely determined.
e. False. A counterexample is P = [1 1; 1 −1]. The columns here are orthogonal but not orthonormal.
f. False. See Example 6 in Section 7.2.
g. False. A counterexample is A = [2 0; 0 −3] and x = [1; 0]. Then x^T Ax = 2 > 0, but x^T Ax is an indefinite quadratic form.
h. True. This is basically the Principal Axes Theorem from Section 7.2. Any quadratic form can be written as x^T Ax for some symmetric matrix A.
i. False. See Example 3 in Section 7.3.
j. False. The maximum value must be computed over the set of unit vectors. Without a restriction on the norm of x, the values of x^T Ax can be made as large as desired.
k. False. Any orthogonal change of variable x = Py changes a positive definite quadratic form into another positive definite quadratic form. Proof: By Theorem 5 of Section 7.2, the classification of a quadratic form is determined by the eigenvalues of the matrix of the form. Given a form x^T Ax, the matrix of the new quadratic form is P^{−1}AP, which is similar to A and thus has the same eigenvalues as A.
l. False. The term “definite eigenvalue” is undefined and therefore meaningless.
m. True. If x = Py, then x^T Ax = (Py)^T A(Py) = y^T P^T APy = y^T(P^{−1}AP)y.
n. False. A counterexample is U = [1 1; 1 −1]. The columns of U must be orthonormal to make UU^T x the orthogonal projection of x onto Col U.
o. True. This follows from the discussion in Example 2 of Section 7.4, which refers to a proof given in Example 1.
p. True. Theorem 10 in Section 7.4 writes the decomposition in the form UΣV^T, where U and V are orthogonal matrices. In this case, V^T is also an orthogonal matrix. Proof: Since V is orthogonal, V is invertible and V^{−1} = V^T. Then (V^T)^{−1} = (V^{−1})^T = (V^T)^T, and since V is square and invertible, V^T is an orthogonal matrix.
q. False. A counterexample is A = [2 0; 0 1]. The singular values of A are 2 and 1, but the singular values of A^T A are 4 and 1.
2. a. Each term in the expansion of A is symmetric by Exercise 35 in Section 7.1. The fact that (B + C)^T = B^T + C^T implies that any sum of symmetric matrices is symmetric, so A is symmetric.
b. Since u1^T u1 = 1 and uj^T u1 = 0 for j ≠ 1,
Au1 = (λ1u1u1^T + ⋯ + λnunun^T)u1 = λ1u1(u1^T u1) = λ1u1
Since u1 ≠ 0, λ1 is an eigenvalue of A. A similar argument shows that λj is an eigenvalue of A for j = 2, …, n.
3. If rank A = r, then dim Nul A = n − r by the Rank Theorem. So 0 is an eigenvalue of A with multiplicity n − r, and of the n terms in the spectral decomposition of A exactly n − r are zero. The remaining r terms (which correspond to nonzero eigenvalues) are all rank 1 matrices, as mentioned in the discussion of the spectral decomposition.
4. a. By Theorem 3 in Section 6.1, (Col A)⊥ = Nul A^T = Nul A since A^T = A.
b. Let y be in ℝ^n. By the Orthogonal Decomposition Theorem in Section 6.3, y = ŷ + z, where ŷ is in Col A and z is in (Col A)⊥. By part a., z is in Nul A.
5. If Av = λv for some nonzero λ, then v = λ^{−1}Av = A(λ^{−1}v), which shows that v is a linear combination of the columns of A.
6. Because A is symmetric, there is an orthonormal eigenvector basis {u1, …, un} for ℝ^n. Let r = rank A. If r = 0, then A = O and the decomposition of Exercise 4(b) is y = 0 + y for each y in ℝ^n; if r = n then the decomposition is y = y + 0 for each y in ℝ^n.
Assume that 0 < r < n. Then dim Nul A = n − r by the Rank Theorem, and so 0 is an eigenvalue of A with multiplicity n − r. Hence there are r nonzero eigenvalues, counted according to their multiplicities. Renumber the eigenvector basis if necessary so that u1, …, ur are the eigenvectors corresponding to the nonzero eigenvalues. By Exercise 5, u1, …, ur are in Col A. Also, u_{r+1}, …, un are in Nul A because these vectors are eigenvectors corresponding to the eigenvalue 0. For y in ℝ^n, there are scalars c1, …, cn such that
y = (c1u1 + ⋯ + crur) + (c_{r+1}u_{r+1} + ⋯ + cnun) = ŷ + z
This provides the decomposition in Exercise 4(b).
7. If A = R^T R and R is invertible, then A is positive definite by Exercise 25 in Section 7.2. Conversely, suppose that A is positive definite. Then by Exercise 26 in Section 7.2, A = B^T B for some positive definite matrix B. Since the eigenvalues of B are positive, 0 is not an eigenvalue of B and B is invertible. Thus the columns of B are linearly independent. By Theorem 12 in Section 6.4, B = QR for some n × n matrix Q with orthonormal columns and some upper triangular matrix R with positive entries on its diagonal. Since Q is a square matrix, Q^T Q = I, and
A = B^T B = (QR)^T QR = R^T Q^T QR = R^T R
and R has the required properties.
8. Suppose that A is positive definite, and consider a Cholesky factorization of A = R^T R with R upper triangular and having positive entries on its diagonal. Let D be the diagonal matrix whose diagonal entries are the entries on the diagonal of R. Since right-multiplication by a diagonal matrix scales the columns of the matrix on its left, the matrix L = R^T D^{−1} is lower triangular with 1’s on its diagonal. If U = DR, then A = R^T D^{−1}DR = LU.
9. If A is an m × n matrix and x is in ℝ^n, then x^T(A^T A)x = (Ax)^T(Ax) = || Ax ||² ≥ 0. Thus A^T A is positive semidefinite. By Exercise 22 in Section 6.5, rank A^T A = rank A.
10. If rank G = r, then dim Nul G = n − r by the Rank Theorem. Hence 0 is an eigenvalue of G with multiplicity n − r, and the spectral decomposition of G is
G = λ1u1u1^T + ⋯ + λrurur^T
Also λ1, …, λr are positive because G is positive semidefinite. Thus
G = (√λ1 u1)(√λ1 u1)^T + ⋯ + (√λr ur)(√λr ur)^T
By the column-row expansion of a matrix product, G = BB^T where B is the n × r matrix B = [√λ1 u1 ⋯ √λr ur]. Finally, G = A^T A for A = B^T.
11. Let A = UΣV^T be a singular value decomposition of A. Since U is orthogonal, U^T U = I and
A = UΣU^T UV^T = PQ
where P = UΣU^{−1} = UΣU^T and Q = UV^T. Since Σ is symmetric, P is symmetric, and P has nonnegative eigenvalues because it is similar to Σ, which is diagonal with nonnegative diagonal entries. Thus P is positive semidefinite. The matrix Q is orthogonal since it is the product of orthogonal matrices.
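The factorization A = PQ in Exercise 11 (a polar decomposition) can be built exactly as the proof indicates; a NumPy sketch on an arbitrary example matrix:

```python
import numpy as np

# Arbitrary example matrix (illustration only, not from the text).
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

U, s, Vt = np.linalg.svd(A)
P = U @ np.diag(s) @ U.T  # symmetric positive semidefinite factor
Q = U @ Vt                # orthogonal factor
```

P is symmetric with nonnegative eigenvalues, Q is orthogonal, and PQ reproduces A.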
12. a. Because the columns of Vr are orthonormal,
AA⁺y = (UrDVr^T)(VrD^{−1}Ur^T)y = UrDD^{−1}Ur^T y = UrUr^T y
Since UrUr^T y is the orthogonal projection of y onto Col Ur by Theorem 10 in Section 6.3, and since Col Ur = Col A by (5) in Example 6 of Section 7.4, AA⁺y is the orthogonal projection of y onto Col A.
b. Because the columns of Ur are orthonormal,
A⁺Ax = (VrD^{−1}Ur^T)(UrDVr^T)x = VrD^{−1}DVr^T x = VrVr^T x
Since VrVr^T x is the orthogonal projection of x onto Col Vr by Theorem 10 in Section 6.3, and since Col Vr = Row A by (8) in Example 6 of Section 7.4, A⁺Ax is the orthogonal projection of x onto Row A.
c. Using the reduced singular value decomposition, the definition of A⁺, and the associativity of matrix multiplication gives:
AA⁺A = (UrDVr^T)(VrD^{−1}Ur^T)(UrDVr^T) = (UrDD^{−1}Ur^T)(UrDVr^T) = UrDD^{−1}DVr^T = UrDVr^T = A
A⁺AA⁺ = (VrD^{−1}Ur^T)(UrDVr^T)(VrD^{−1}Ur^T) = (VrD^{−1}DVr^T)(VrD^{−1}Ur^T) = VrD^{−1}DD^{−1}Ur^T = VrD^{−1}Ur^T = A⁺
13. a. If b = Ax, then x⁺ = A⁺b = A⁺Ax. By Exercise 12(b), x⁺ is the orthogonal projection of x onto Row A.
b. From part (a) and Exercise 12(c), Ax⁺ = A(A⁺Ax) = (AA⁺A)x = Ax = b.
c. Let Au = b. By part (a), x⁺ is the orthogonal projection of u onto Row A, so the Pythagorean Theorem shows that || u ||² = || x⁺ ||² + || u − x⁺ ||² ≥ || x⁺ ||², with equality only if u = x⁺.
14. The least-squares solutions of Ax = b are precisely the solutions of Ax = b̂, where b̂ is the orthogonal projection of b onto Col A. From Exercise 13, the minimum length solution of Ax = b̂ is A⁺b̂, so A⁺b̂ is the minimum length least-squares solution of Ax = b. However, b̂ = AA⁺b by Exercise 12(a) and hence A⁺b̂ = A⁺AA⁺b = A⁺b by Exercise 12(c). Thus A⁺b is the minimum length least-squares solution of Ax = b.
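Exercise 14's conclusion, that A⁺b is the minimum length least-squares solution, is exactly what `np.linalg.pinv` delivers. A small underdetermined example (illustrative, not from the text):

```python
import numpy as np

# An underdetermined consistent system (illustration only).
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
b = np.array([2.0, 3.0])

x_plus = np.linalg.pinv(A) @ b  # minimum-length solution of Ax = b

# Any other solution differs by a null-space vector and is longer.
n = np.array([1.0, 1.0, -1.0])  # A @ n = 0
other = x_plus + n
```

Since the system is consistent, x_plus solves it exactly, and adding any nonzero null-space vector strictly increases the norm.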
15. [M] The reduced SVD of A is A = UrDVr^T, where
Ur = [.966641 .253758 −.034804; .185205 −.786338 −.589382; .125107 −.398296 .570709; .125107 −.398296 .570709], D = [9.84443 0 0; 0 2.62466 0; 0 0 1.09467],
and Vr = [.313388 .009549 .633795; .313388 .009549 .633795; .633380 .023005 −.313529; .633380 .023005 −.313529; .035148 −.999379 −.002322]
So the pseudoinverse A⁺ = VrD^{−1}Ur^T may be calculated, as well as the solution x̂ = A⁺b for the system Ax = b:
A⁺ = [.05 −.35 .325 .325; .05 −.35 .325 .325; .05 .15 .175 .175; .05 .15 .175 .175; −.10 .30 −.150 −.150], x̂ = [.7; .7; .8; .8; −.6]
Row reducing the augmented matrix for the system A^T z = x̂ shows that this system has a solution, so x̂ is in Col A^T = Row A. A basis for Nul A is {a1, a2} = {[1; −1; 0; 0; 0], [0; 0; 1; −1; 0]}, and an arbitrary element of Nul A is u = c a1 + d a2. One computes that || x̂ ||² = 131/50, while || x̂ + u ||² = (131/50) + 2c² + 2d². Thus if u ≠ 0, || x̂ || < || x̂ + u ||, which confirms that x̂ is the minimum length solution to Ax = b.
16. [M] The reduced SVD of A is A = UrDVr^T, where
Ur = [.337977 −.936307 −.095396; .591763 .290230 −.752053; −.231428 −.062526 −.206232; −.694283 −.187578 −.618696], D = [12.9536 0 0; 0 1.44553 0; 0 0 .337763],
and Vr = [.690099 .721920 .050939; 0 0 0; .341800 −.387156 .856320; .637916 −.573534 −.513928; 0 0 0]
So the pseudoinverse A⁺ = VrD^{−1}Ur^T may be calculated, as well as the solution x̂ = A⁺b for the system Ax = b:
A⁺ = [.5 0 −.05 .15; 0 0 0 0; 0 −2 .5 1.5; .5 −1 −.35 −1.05; 0 0 0 0], x̂ = [2.3; 0; 5.0; −.9; 0]
Row reducing the augmented matrix for the system A^T z = x̂ shows that this system has a solution, so x̂ is in Col A^T = Row A. A basis for Nul A is {a1, a2} = {[0; 1; 0; 0; 0], [0; 0; 0; 0; 1]}, and an arbitrary element of Nul A is u = c a1 + d a2. One computes that || x̂ ||² = 311/10, while || x̂ + u ||² = (311/10) + c² + d². Thus if u ≠ 0, || x̂ || < || x̂ + u ||, which confirms that x̂ is the minimum length solution to Ax = b.
CHAPTER 8 The Geometry of Vector Spaces
8.1 SOLUTIONS
Notes: This section introduces a special kind of linear combination used to describe the sets created
when a subspace is shifted away from the origin. An affine combination is a linear combination in which
the coefficients sum to one. Theorems 1, 3, and 4 connect affine combinations directly to linear
combinations. There are several approaches to solving many of the exercises in this section, and some of
the alternatives are presented here.
1. v1 = [1; 2], v2 = [−2; 2], v3 = [0; 4], v4 = [3; 7], y = [5; 3]
v2 − v1 = [−3; 0], v3 − v1 = [−1; 2], v4 − v1 = [2; 5], y − v1 = [4; 1]
Solve c2(v2 − v1) + c3(v3 − v1) + c4(v4 − v1) = y − v1 by row reducing the augmented matrix:
[−3 −1 2 4; 0 2 5 1] ~ [−3 0 4.5 4.5; 0 1 2.5 .5] ~ [1 0 −1.5 −1.5; 0 1 2.5 .5]
The general solution is c2 = 1.5c4 − 1.5, c3 = −2.5c4 + .5, with c4 free. When c4 = 0,
y − v1 = −1.5(v2 − v1) + .5(v3 − v1) and y = 2v1 − 1.5v2 + .5v3
If c4 = 1, then c2 = 0 and
y − v1 = −2(v3 − v1) + 1(v4 − v1) and y = 2v1 − 2v3 + v4
If c4 = 3, then
y − v1 = 3(v2 − v1) − 7(v3 − v1) + 3(v4 − v1) and y = 2v1 + 3v2 − 7v3 + 3v4
Of course, many other answers are possible. Note that in all cases, the weights in the linear combination sum to one.
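Two of the affine combinations found in Exercise 1 can be checked numerically. The vectors below follow the editorial reading of the garbled original, and NumPy is used as a sketch (not part of the original solution):

```python
import numpy as np

# Vectors of Exercise 1 (signs are an editorial reading of the original).
v1 = np.array([1.0, 2.0])
v2 = np.array([-2.0, 2.0])
v3 = np.array([0.0, 4.0])
v4 = np.array([3.0, 7.0])
y = np.array([5.0, 3.0])

# Weight sets from the c4 = 0 and c4 = 1 cases.
w_a = (2.0, -1.5, 0.5, 0.0)
w_b = (2.0, 0.0, -2.0, 1.0)
combo_a = w_a[0] * v1 + w_a[1] * v2 + w_a[2] * v3 + w_a[3] * v4
combo_b = w_b[0] * v1 + w_b[1] * v2 + w_b[2] * v3 + w_b[3] * v4
```

Each weight set sums to one and reproduces y, so both are affine combinations of v1, …, v4 equal to y.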
2. v1 = [1; 1], v2 = [−1; 2], v3 = [3; 2], y = [5; 7], so
v2 − v1 = [−2; 1], v3 − v1 = [2; 1], and y − v1 = [4; 6]
Solve c2(v2 − v1) + c3(v3 − v1) = y − v1 by row reducing the augmented matrix:
[−2 2 4; 1 1 6] ~ [1 −1 −2; 0 2 8] ~ [1 0 2; 0 1 4]
The general solution is c2 = 2 and c3 = 4, so y − v1 = 2(v2 − v1) + 4(v3 − v1) and y = −5v1 + 2v2 + 4v3. The weights sum to one, so this is an affine sum.
3. Row reduce the augmented matrix [v2 − v1 v3 − v1 y − v1] to find a solution for writing y − v1 in terms of v2 − v1 and v3 − v1. Then solve for y to get y = −3v1 + 2v2 + 2v3. The weights sum to one, so this is an affine sum.
4. Row reduce the augmented matrix [v2 − v1 v3 − v1 y − v1] to find a solution for writing y − v1 in terms of v2 − v1 and v3 − v1. Then solve for y to get y = 2.6v1 − .4v2 − 1.2v3. The weights sum to one, so this is an affine sum.
5. Since {b1, b2, b3} is an orthogonal basis, use Theorem 5 from Section 6.2 to write
p = (p·b1/b1·b1)b1 + (p·b2/b2·b2)b2 + (p·b3/b3·b3)b3
a. p1 = 3b1 − b2 − b3 ∈ aff S since the coefficients sum to one.
b. p2 = 2b1 + 0b2 + b3 ∉ aff S since the coefficients do not sum to one.
c. p3 = −b1 + 2b2 + 0b3 ∈ aff S since the coefficients sum to one.
6. Since {b1, b2, b3} is an orthogonal basis, use Theorem 5 from Section 6.2 to write
p = (p·b1/b1·b1)b1 + (p·b2/b2·b2)b2 + (p·b3/b3·b3)b3
a. p1 = −4b1 + 2b2 + 3b3 ∈ aff S since the coefficients sum to one.
b. p2 = .2b1 + .5b2 + .3b3 ∈ aff S since the coefficients sum to one.
c. p3 = b1 + b2 + b3 ∉ aff S since the coefficients do not sum to one.
7. The matrix [v1 v2 v3 p1 p2 p3] row reduces to
[1 0 0 2 2 2; 0 1 0 1 −4 2; 0 0 1 −1 3 2; 0 0 0 0 0 5]
Parts a., b., and c. use columns 4, 5, and 6, respectively, as the “augmented” column.
a. p1 = 2v1 + v2 − v3, so p1 is in Span S. The weights do not sum to one, so p1 ∉ aff S.
b. p2 = 2v1 − 4v2 + 3v3, so p2 is in Span S. The weights sum to one, so p2 ∈ aff S.
c. p3 ∉ Span S because 0 ≠ 5, so p3 cannot possibly be in aff S.
8. The matrix [v1 v2 v3 p1 p2 p3] row reduces to
[1 0 0 3 0 −2; 0 1 0 −1 0 6; 0 0 1 1 0 −3; 0 0 0 0 1 0]
Parts a., b., and c. use columns 4, 5, and 6, respectively, as the “augmented” column.
a. p1 = 3v1 − v2 + v3, so p1 is in Span S. The weights do not sum to one, so p1 ∉ aff S.
b. p2 ∉ Span S because 0 ≠ 1 (column 5 is the augmented column), so p2 cannot possibly be in aff S.
c. p3 = −2v1 + 6v2 − 3v3, so p3 is in Span S. The weights sum to one, so p3 ∈ aff S.
9. Choose v1 and v2 to be any two points on the line x = x3u + p. For example, take x3 = 0 and x3 = 1 to get v1 = [3; 0] and v2 = [1; 2] respectively. Other answers are possible.
10. Choose v1 and v2 to be any two points on the line x = x3u + p. For example, take x3 = 0 and x3 = 1 to get v1 = [1; 3; 4] and v2 = [6; 2; 2] respectively. Other answers are possible.
11. a. True. See the definition at the beginning of this section.
b. False. The weights in the linear combination must sum to one. See the definition.
c. True. See equation (1).
d. False. A flat is a translate of a subspace. See the definition prior to Theorem 3.
e. True. A hyperplane in ℝ^3 has dimension 2, so it is a plane. See the definition prior to Theorem 3.
12. a. False. If S = {x}, then aff S = {x}. See the definition at the beginning of this section.
b. True. Theorem 2.
c. True. See the definition prior to Theorem 3.
d. False. A flat of dimension 2 is called a hyperplane only if the flat is considered a subset of ℝ^3. In general, a hyperplane is a flat of dimension n − 1. See the definition prior to Theorem 3.
e. True. A flat through the origin is a subspace translated by the 0 vector.
13. Span {v2 − v1, v3 − v1} is a plane if and only if {v2 − v1, v3 − v1} is linearly independent. Suppose c2 and c3 satisfy c2(v2 − v1) + c3(v3 − v1) = 0. Then c2v2 + c3v3 − (c2 + c3)v1 = 0. Then c2 = c3 = 0, because {v1, v2, v3} is a linearly independent set. This shows that {v2 − v1, v3 − v1} is a linearly independent set. Thus, Span {v2 − v1, v3 − v1} is a plane in ℝ^3.
14. Since {v1, v2, v3} is a basis for ℝ^3, the set W = Span {v2 − v1, v3 − v1} is a plane in ℝ^3, by Exercise 13. Thus, W + v1 is a plane parallel to W that contains v1. Since v2 = (v2 − v1) + v1, W + v1 contains v2. Similarly, W + v1 contains v3. Finally, Theorem 1 shows that aff {v1, v2, v3} is the plane W + v1 that contains v1, v2, and v3.
15. Let S = {x : Ax = b}. To show that S is affine, it suffices to show that S is a flat, by Theorem 3. Let W = {x : Ax = 0}. Then W is a subspace of ℝ^n, by Theorem 2 in Section 4.2 (or Theorem 12 in Section 2.8). Since S = W + p, where p satisfies Ap = b, by Theorem 6 in Section 1.5, S is a translate of W, and hence S is a flat.
16. Suppose p, q ∈ S and t ∈ ℝ. Then, by properties of the dot product (Theorem 1 in Section 6.1),
[(1 − t)p + t q] · v = (1 − t)(p · v) + t (q · v) = (1 − t)k + t k = k
Thus, [(1 − t)p + t q] ∈ S, by definition of S. This shows that S is an affine set.
17. A suitable set consists of any three vectors that are not collinear and have 5 as their third entry. If 5 is their third entry, they lie in the plane x3 = 5. If the vectors are not collinear, their affine hull cannot be a line, so it must be the plane. For example, use S = {(1, 0, 5), (0, 1, 5), (1, 1, 5)}.
18. A suitable set consists of any four vectors that lie in the plane 2x1 + x2 − 3x3 = 12 and are not collinear. If the vectors are not collinear, their affine hull cannot be a line, so it must be the plane. For example, use S = {(6, 0, 0), (0, 12, 0), (0, 0, −4), (3, 3, −1)}.
19. If p, q ∈ f (S), then there exist r, s ∈ S such that f (r) = p and f (s) = q. Given any t ∈ ℝ, we must show that z = (1 − t)p + t q is in f (S). Since f is linear,
z = (1 − t)p + t q = (1 − t) f (r) + t f (s) = f ((1 − t)r + t s)
Since S is affine, (1 − t)r + t s ∈ S. Thus, z is in f (S) and f (S) is affine.
20. Given an affine set T, let S = {x ∈ ℝⁿ : f (x) ∈ T}. Consider x, y ∈ S and t ∈ ℝ. Then
f ((1 − t)x + t y) = (1 − t) f (x) + t f (y)
But f (x) ∈ T and f (y) ∈ T, so (1 − t) f (x) + t f (y) ∈ T because T is an affine set. It follows that [(1 − t)x + t y] ∈ S. This is true for all x, y ∈ S and t ∈ ℝ, so S is an affine set.
21. Since B is affine, Theorem 1 implies that B contains all affine combinations of points of B. Hence B contains all affine combinations of points of A. That is, aff A ⊆ B.
22. Since B ⊆ aff B, we have A ⊆ B ⊆ aff B. But aff B is an affine set, so Exercise 21 implies aff A ⊆ aff B.
23. Since A ⊆ (A ∪ B), it follows from Exercise 22 that aff A ⊆ aff (A ∪ B). Similarly, aff B ⊆ aff (A ∪ B), so [aff A ∪ aff B] ⊆ aff (A ∪ B).
24. One possibility is to let A be two points on the y-axis and B two points on the x-axis. Then (aff A) ∪ (aff B) consists of the two coordinate axes, but aff (A ∪ B) = ℝ².
25. Since (A ∩ B) ⊆ A, it follows from Exercise 22 that aff (A ∩ B) ⊆ aff A. Similarly, aff (A ∩ B) ⊆ aff B, so aff (A ∩ B) ⊆ (aff A ∩ aff B).
26. One possibility is to let A and B be disjoint pairs of points on the x-axis. Then both aff A and aff B are equal to the x-axis. But A ∩ B = ∅, so aff (A ∩ B) = ∅.
8.2 SOLUTIONS
Notes:
Affine dependence and independence are developed in this section. Theorem 5 links affine
independence to linear independence. This material has important applications to computer graphics.
1. Let v1 = (3, −3), v2 = (0, 6), and v3 = (2, 0). Then v2 − v1 = (−3, 9) and v3 − v1 = (−1, 3). Since v3 − v1 is a multiple of v2 − v1, these two points are linearly dependent. By Theorem 5, {v1, v2, v3} is affinely dependent. Note that (v2 − v1) − 3(v3 − v1) = 0. A rearrangement produces the affine dependence relation 2v1 + v2 − 3v3 = 0. (Note that the weights sum to zero.) Geometrically, v1, v2, and v3 are collinear.
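As a quick numerical check (a sketch, not part of the manual), the dependence relation can be verified directly in Python using the points of the exercise:

```python
# The points of Exercise 1 and the affine dependence relation found above.
v1, v2, v3 = (3, -3), (0, 6), (2, 0)
weights = (2, 1, -3)   # 2*v1 + 1*v2 - 3*v3

# Weights of an affine dependence relation sum to zero ...
weight_sum = sum(weights)
# ... and the weighted sum of the points is the zero vector.
combo = tuple(sum(w * p[i] for w, p in zip(weights, (v1, v2, v3)))
              for i in range(2))
```

Both conditions of the definition (weights summing to zero, combination equal to 0) can then be checked at once.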
2. Let v1 = (2, 1), v2 = (5, 4), and v3 = (−3, −2). Then v2 − v1 = (3, 3) and v3 − v1 = (−5, −3). Since v3 − v1 and v2 − v1 are not multiples, they are linearly independent. By Theorem 5, {v1, v2, v3} is affinely independent.
3. The set is affinely independent. If the points are called v1, v2, v3, and v4, then row reduction of [v1 v2 v3 v4] shows that {v1, v2, v3} is a basis for ℝ³ and v4 = 16v1 + 5v2 − 3v3. Since there is a unique way to write v4 in terms of the basis vectors, and the weights in the linear combination do not sum to one, v4 is not an affine combination of the first three vectors.
4. Name the points v1, v2, v3, and v4. Then
v2 − v1 = (2, −8, −4),  v3 − v1 = (3, −7, 9),  v4 − v1 = (0, 2, 6).
To study the linear independence of these points, row reduce the augmented matrix for Ax = 0:
[ 2  3  0  0]   [2  3  0  0]   [2 3 0 0]   [1 0 −.6 0]
[−8 −7  2  0] ~ [0  5  2  0] ~ [0 5 2 0] ~ [0 1  .4 0]
[−4  9  6  0]   [0 15  6  0]   [0 0 0 0]   [0 0   0 0]
The three columns are linearly dependent, so {v1, v2, v3, v4} is affinely dependent, by Theorem 5. To find the affine dependence relation, write the general solution of this system: x1 = .6x3, x2 = −.4x3, with x3 free. Set x3 = 5, for instance. Then x1 = 3, x2 = −2, and x3 = 5. Thus, 3(v2 − v1) − 2(v3 − v1) + 5(v4 − v1) = 0. Rearrange to obtain −6v1 + 3v2 − 2v3 + 5v4 = 0.
Alternative solution: Name the points v1, v2, v3, and v4. Use Theorem 5(d) and study the homogeneous forms of the points. The first step is to move the bottom row of ones (in the augmented matrix) to the top to simplify the arithmetic:
                 [ 1  1  1  1]   [1 0 0 1.2]
[ṽ1 ṽ2 ṽ3 ṽ4] =  [−2  0  1 −2] ~ [0 1 0 −.6]
                 [ 5 −3 −2  7]   [0 0 1  .4]
                 [−3 −7  6  3]   [0 0 0   0]
Thus, x1 + 1.2x4 = 0, x2 − .6x4 = 0, and x3 + .4x4 = 0, with x4 free. Take x4 = 5, for example, and get x1 = −6, x2 = 3, and x3 = −2. An affine dependence relation is −6v1 + 3v2 − 2v3 + 5v4 = 0.
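The row reduction of the homogeneous forms can be automated. The following sketch (illustrative code, not from the text; the helper `affine_dependence` is an assumption of this note) carries out the method of Theorem 5(d) in exact rational arithmetic, using the points of Exercise 4 as reconstructed above:

```python
from fractions import Fraction

def affine_dependence(points):
    """Row reduce the homogeneous forms (a row of ones on top of the
    coordinates) and return weights c1..ck with sum 0 and
    c1*p1 + ... + ck*pk = 0, or None if the set is affinely independent."""
    k, n = len(points), len(points[0])
    rows = [[Fraction(1)] * k] + \
           [[Fraction(p[i]) for p in points] for i in range(n)]
    pivots, r = [], 0
    for c in range(k):
        pr = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if pr is None:
            continue
        rows[r], rows[pr] = rows[pr], rows[r]
        rows[r] = [x / rows[r][c] for x in rows[r]]       # normalize pivot row
        for i in range(len(rows)):
            if i != r and rows[i][c] != 0:                # clear the column
                rows[i] = [a - rows[i][c] * b for a, b in zip(rows[i], rows[r])]
        pivots.append(c)
        r += 1
    free = [c for c in range(k) if c not in pivots]
    if not free:
        return None   # homogeneous forms independent => affinely independent
    f = free[0]
    w = [Fraction(0)] * k
    w[f] = Fraction(1)
    for row, c in zip(rows, pivots):
        w[c] = -row[f]                                    # back-substitute
    return w

# The four points of Exercise 4, as reconstructed above.
pts = [(-2, 5, -3), (0, -3, -7), (1, -2, 6), (-2, 7, 3)]
w = affine_dependence(pts)
```

Scaling the returned weights by 5 recovers the relation −6v1 + 3v2 − 2v3 + 5v4 = 0 found above.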
5. −4v1 + 5v2 − 4v3 + 3v4 = 0 is an affine dependence relation. It can be found by row reducing the matrix [ṽ1 ṽ2 ṽ3 ṽ4] of homogeneous forms and proceeding as in the solution to Exercise 4.
6. The set is affinely independent, as the following calculation with homogeneous forms shows: the matrix [ṽ1 ṽ2 ṽ3 ṽ4] (with the row of ones moved to the top) row reduces to the 4×4 identity matrix, so the homogeneous forms are linearly independent, and by Theorem 5 the set is affinely independent.
Alternative solution: Row reduction of [v1 v2 v3 v4] shows that {v1, v2, v3} is a basis for ℝ³ and v4 = 2v1 + 1.5v2 + 2.5v3, but the weights in the linear combination do not sum to one, so this v4 is not an affine combination of the basis vectors and hence the set is affinely independent.
Note: A potential exam question might be to change the last entry of v4 from 0 to 1 and again ask if the set is affinely independent. Notice that row reduction of this new set of vectors [v1 v2 v3 v4] shows that {v1, v2, v3} is a basis for ℝ³ and v4 = −3v1 + v2 + 3v3 is an affine combination of the basis (the weights sum to one).
7. Denote the given points as v1, v2, v3, and p. Row reduce the augmented matrix for the equation x1ṽ1 + x2ṽ2 + x3ṽ3 = p̃. Remember to move the bottom row of ones to the top as the first step to simplify the arithmetic by hand:
                [ 1  1  1  1]   [1 0 0 −2]
                [ 1  2  1  5]   [0 1 0  4]
[ṽ1 ṽ2 ṽ3 p̃] =  [−1  1  2  4] ~ [0 0 1 −1]
                [ 2  0 −2 −2]   [0 0 0  0]
                [ 1  1  0  2]   [0 0 0  0]
Thus, x1 = −2, x2 = 4, x3 = −1, and p̃ = −2ṽ1 + 4ṽ2 − ṽ3, so p = −2v1 + 4v2 − v3, and the barycentric coordinates are (−2, 4, −1).
Alternative solution: Another way that this problem can be solved is by “translating” it to the origin. That is, compute v2 − v1, v3 − v1, and p − v1, find weights c2 and c3 such that c2(v2 − v1) + c3(v3 − v1) = p − v1, and then write p = (1 − c2 − c3)v1 + c2v2 + c3v3. Here are the calculations for Exercise 7:
v2 − v1 = (2, 1, 0, 1) − (1, −1, 2, 1) = (1, 2, −2, 0)
v3 − v1 = (1, 2, −2, 0) − (1, −1, 2, 1) = (0, 3, −4, −1)
p − v1 = (5, 4, −2, 2) − (1, −1, 2, 1) = (4, 5, −4, 1)
                          [ 1  0  4]   [1 0  4]
[v2 − v1  v3 − v1  p − v1] = [ 2  3  5] ~ [0 1 −1]
                          [−2 −4 −4]   [0 0  0]
                          [ 0 −1  1]   [0 0  0]
Thus p − v1 = 4(v2 − v1) − 1(v3 − v1), and p = −2v1 + 4v2 − v3.
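The barycentric coordinates just found can be checked numerically. A minimal sketch (not part of the manual), using the points of Exercise 7 as reconstructed above:

```python
# Points of Exercise 7 (as reconstructed) and the barycentric
# coordinates found above.
v1 = (1, -1, 2, 1)
v2 = (2, 1, 0, 1)
v3 = (1, 2, -2, 0)
p = (5, 4, -2, 2)
coords = (-2, 4, -1)

# The weighted combination of v1, v2, v3 should reproduce p ...
combo = tuple(sum(c * v[i] for c, v in zip(coords, (v1, v2, v3)))
              for i in range(4))
# ... and barycentric coordinates always sum to one.
weight_sum = sum(coords)
```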
8. Denote the given points as v1, v2, v3, and p. Row reduce the augmented matrix for the equation x1ṽ1 + x2ṽ2 + x3ṽ3 = p̃:
                [1 0 0  2]
                [0 1 0 −1]
[ṽ1 ṽ2 ṽ3 p̃] ~  [0 0 1  0]
                [0 0 0  0]
                [0 0 0  0]
Thus, p̃ = 2ṽ1 − ṽ2 + 0ṽ3, so p = 2v1 − v2. The barycentric coordinates are (2, −1, 0). Notice v3 = 3v1 + v2.
9. a. True. Theorem 5 uses the point v1 for the translation, but the paragraph after the theorem points out that any one of the points in the set can be used for the translation.
b. False, by (d) of Theorem 5.
c. False. The weights in the linear combination must sum to zero, not one. See the definition at the
beginning of this section.
d. False. The only points that have barycentric coordinates determined by S belong to aff S. See the
definition after Theorem 6.
e. True. The barycentric coordinates have some zeros on the edges of the triangle and are only
positive for interior points. See Example 6.
10. a. False. By Theorem 5, the set of homogeneous forms must be linearly dependent, too.
b. True. If one statement in Theorem 5 is false, the other statements are false, too.
c. False. Theorem 6 applies only when S is affinely independent.
d. False. The color interpolation applies only to points whose barycentric coordinates are
nonnegative, since the colors are formed by nonnegative combinations of red, green, and blue.
See Example 5.
e. True. See the discussion of Fig. 5.
11. When a set of five points is translated by subtracting, say, the first point, the new set of four points must be linearly dependent, by Theorem 8 in Section 1.7, because the four points are in ℝ³. By Theorem 5, the original set of five points is affinely dependent.
12. Suppose v1, …, vp are in ℝⁿ and p ≥ n + 2. Since p − 1 ≥ n + 1, the points v2 − v1, v3 − v1, …, vp − v1 are linearly dependent, by Theorem 8 in Section 1.7. By Theorem 5, {v1, v2, …, vp} is affinely dependent.
13. If {v1, v2} is affinely dependent, then there exist c1 and c2, not both zero, such that c1 + c2 = 0 and c1v1 + c2v2 = 0. Then c1 = −c2 ≠ 0 and c1v1 = −c2v2 = c1v2, which implies that v1 = v2. Conversely, if v1 = v2, let c1 = 1 and c2 = −1. Then c1v1 + c2v2 = v1 + (−1)v1 = 0 and c1 + c2 = 0, which shows that {v1, v2} is affinely dependent.
14. Let S1 consist of three (distinct) points on a line through the origin. The set is affinely dependent because the third point is on the line determined by the first two points. Let S2 consist of two (distinct) points on a line through the origin. By Exercise 13, the set is affinely independent because the two points are distinct. (A correct solution should include a justification for the sets presented.)
15. a. The vectors v2 − v1 and v3 − v1 are not multiples of each other and hence are linearly independent. By Theorem 5, S is affinely independent.
b. p1 ↔ (−6/8, 9/8, 5/8), p2 ↔ (0, 1/2, 1/2), p3 ↔ (14/8, −5/8, −1/8), p4 ↔ (−5/8, 6/8, 7/8), p5 ↔ (1/4, 5/8, 1/8).
c. p6 is (−, −, +), p7 is (0, +, −), and p8 is (+, +, −).
16. a. The vectors v2 − v1 and v3 − v1 are not multiples of each other and hence are linearly independent. By Theorem 5, S is affinely independent.
b. p1 ↔ (−2/7, 5/7, 4/7), p2 ↔ (2/7, −5/7, 10/7), p3 ↔ (2/7, 2/7, 3/7).
c. p4 ↔ (+, −, −), p5 ↔ (+, +, −), p6 ↔ (+, +, +), p7 ↔ (−, 0, +). See the figure to the right. Actually,
p4 ↔ (19/14, −3/14, −2/14), p5 ↔ (5/14, 12/14, −3/14), p6 ↔ (9/14, 3/14, 2/14), p7 ↔ (−1/2, 0, 3/2).
17. Suppose S = {b1, …, bk} is an affinely independent set. Then (7) has a solution, because p is in aff S. Hence (8) has a solution. By Theorem 5, the homogeneous forms of the points in S are linearly independent. Thus (8) has a unique solution. Then (7) also has a unique solution, because (8) encodes both equations that appear in (7).
The following argument mimics the proof of Theorem 7 in Section 4.4. If S = {b1, …, bk} is an affinely independent set, then scalars c1, …, ck exist that satisfy (7), by definition of aff S. Suppose x also has the representation
x = d1b1 + ⋯ + dkbk and d1 + ⋯ + dk = 1 (7a)
for scalars d1, …, dk. Then subtraction produces the equation
0 = x − x = (c1 − d1)b1 + ⋯ + (ck − dk)bk (7b)
The weights in (7b) sum to zero because the c’s and the d’s separately sum to one. This is impossible, unless each weight in (7b) is zero, because S is an affinely independent set. This proves that ci = di for i = 1, …, k.
18. Let p = (x, y, z). Then
p = (x/a)(a, 0, 0) + (y/b)(0, b, 0) + (z/c)(0, 0, c) + (1 − x/a − y/b − z/c)(0, 0, 0)
So the barycentric coordinates are x/a, y/b, z/c, and 1 − x/a − y/b − z/c. This holds for any nonzero choices of a, b, and c.
19. If {p1, p2, p3} is an affinely dependent set, then there exist scalars c1, c2, and c3, not all zero, such that c1p1 + c2p2 + c3p3 = 0 and c1 + c2 + c3 = 0. But then, applying the transformation f,
c1 f (p1) + c2 f (p2) + c3 f (p3) = f (c1p1 + c2p2 + c3p3) = f (0) = 0,
since f is linear. This shows that {f (p1), f (p2), f (p3)} is also affinely dependent.
20. If the translated set {p1 + q, p2 + q, p3 + q} were affinely dependent, then there would exist real numbers c1, c2, and c3, not all zero and with c1 + c2 + c3 = 0, such that
c1(p1 + q) + c2(p2 + q) + c3(p3 + q) = 0.
But then,
c1p1 + c2p2 + c3p3 + (c1 + c2 + c3)q = 0.
Since c1 + c2 + c3 = 0, this implies c1p1 + c2p2 + c3p3 = 0, which would make {p1, p2, p3} affinely dependent. But {p1, p2, p3} is affinely independent, so the translated set must in fact be affinely independent, too.
21. Let a = (a1, a2), b = (b1, b2), and c = (c1, c2). Then
               [a1 b1 c1]       [1 a1 a2]
det [ã b̃ c̃] = det [a2 b2 c2] = det [1 b1 b2]
               [ 1  1  1]       [1 c1 c2]
by using the transpose property of the determinant (Theorem 5 in Section 3.2). By Exercise 30 in Section 3.3, this determinant equals 2 times the area of the triangle with vertices at a, b, and c.
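A short numerical illustration (a sketch under assumptions: the triangle below is an arbitrary choice, not from the text) of the fact that det [ã b̃ c̃] is twice the triangle's area:

```python
def det3(M):
    """3x3 determinant by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def twice_triangle_area(a, b, c):
    """det[a~ b~ c~]: homogeneous forms as columns, row of ones on the bottom."""
    return det3([[a[0], b[0], c[0]],
                 [a[1], b[1], c[1]],
                 [1, 1, 1]])

# Hypothetical right triangle with legs 4 and 3, so its area is 6.
val = twice_triangle_area((0, 0), (4, 0), (0, 3))
```

For this triangle the determinant evaluates to 12 = 2 × area, as Exercise 21 asserts (up to sign, which depends on the orientation of the vertices).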
22. If p is on the line through a and b, then p is an affine combination of a and b, so p̃ is a linear combination of ã and b̃. Thus the columns of [ã b̃ p̃] are linearly dependent. So the determinant of this matrix is zero.
23. If [ã b̃ c̃](r, s, t) = p̃, then Cramer’s rule gives r = det [p̃ b̃ c̃] / det [ã b̃ c̃]. By Exercise 21, the numerator of this quotient is twice the area of triangle pbc, and the denominator is twice the area of triangle abc. This proves the formula for r. The other formulas are proved using Cramer’s rule for s and t.
24. Let p = (1 − x)q + x a, where q is on the line segment from b to c. Then, because the determinant is a linear function of the first column when the other columns are fixed (Section 3.2),
det [p̃ b̃ c̃] = det [(1 − x)q̃ + x ã  b̃  c̃] = (1 − x) · det [q̃ b̃ c̃] + x · det [ã b̃ c̃]
Now, [q̃ b̃ c̃] is a singular matrix because q̃ is a linear combination of b̃ and c̃. So det [q̃ b̃ c̃] = 0 and det [p̃ b̃ c̃] = x · det [ã b̃ c̃].
8.3 SOLUTIONS
Notes:
The notion of convexity is introduced in this section and has important applications in computer
graphics. Bézier curves are introduced in Exercises 21–24 and explored in greater detail in Section 8.6.
1. The set V = {(0, y) : 0 ≤ y < 1} is the vertical line segment from (0, 0) to (0, 1) that includes (0, 0) but not (0, 1). The convex hull of S includes each line segment from a point in V to the point (2, 0), as shown in the figure. The dashed line segment along the top of the shaded region indicates that this segment is not in conv S, because (0, 1) is not in S.
2. a. Conv S includes all points p of the form
p = (1 − t)(1/2, 2) + t(x, 1/x) = (1/2 + t(x − 1/2), 2 − t(2 − 1/x)), where x ≥ 1/2 and 0 ≤ t ≤ 1.
Notice that if t = a/x, then
p(x) = (1/2 + a − a/(2x), 2 − 2a/x + a/x²)  and  lim x→∞ p(x) = (1/2 + a, 2),
establishing that there are points arbitrarily close to the line y = 2 in conv S. Since the curve y = 1/x is in S, the line segments between y = 2 and y = 1/x are also included in conv S, whenever x ≥ 1/2.
b. Recall that for any integer n, sin(x + 2πn) = sin(x). Then
p = (1 − t)(x, sin x) + t(x + 2πn, sin(x + 2πn)) = (x + 2πnt, sin x) ∈ conv S.
Notice that sin(x) is always a number between −1 and 1. For a fixed x and any real number r, an integer n and a number t (with 0 ≤ t ≤ 1) can be chosen so that r = x + 2πnt.
c. Conv S includes all points p of the form
p = (1 − t)(0, 0) + t(x, √x) = (tx, t√x), where x ≥ 0 and 0 ≤ t ≤ 1.
Letting t = a/x, lim x→∞ p = (a, 0), establishing that there are points arbitrarily close to y = 0 in the set.
3. From Exercise 5, Section 8.1,
a. p1 = 3b1 − b2 − b3 ∉ conv S since some of the coefficients are negative.
b. p2 = 2b1 + 0b2 + b3 ∉ conv S since the coefficients do not sum to one.
c. p3 = −b1 + 2b2 + 0b3 ∉ conv S since some of the coefficients are negative.
"! #
"
"! #!
"
"
$!
8.3 Solutions 463
!
Copyright © 2012 Pearson Education, Inc. Publishing as Addison-Wesley.
4. From Exercise 5, Section 8.1,
a. p1 = −4b1 + 2b2 + 3b3 ∉ conv S since some of the coefficients are negative.
b. p2 = .2b1 + .5b2 + .3b3 ∈ conv S since the coefficients are nonnegative and sum to one.
c. p3 = b1 + b2 + b3 ∉ conv S since the coefficients do not sum to one.
5. Row reduce the matrix [ṽ1 ṽ2 ṽ3 ṽ4 p̃1 p̃2] to obtain the barycentric coordinates
p1 = (−1/6)v1 + (1/3)v2 + (2/3)v3 + (1/6)v4, so p1 ∉ conv S (one coefficient is negative), and p2 = (1/3)v1 + (1/3)v2 + (1/6)v3 + (1/6)v4, so p2 ∈ conv S.
6. Let W be the subspace spanned by the orthogonal set S = {v1, v2, v3}. As in Example 1, the barycentric coordinates of the points p1, …, p4 with respect to S are easy to compute, and they determine whether or not a point is in Span S, aff S, or conv S.
a. Computing
proj_W p1 = ((p1·v1)/(v1·v1))v1 + ((p1·v2)/(v2·v2))v2 + ((p1·v3)/(v3·v3))v3
gives p1 itself. This shows that p1 is in W = Span S. Also, since the coefficients sum to 1, p1 is in aff S. However, p1 is not in conv S, because the coefficients are not all nonnegative.
b. Similarly, proj_W p2 = (1/4)v1 + (1/4)v2 + (1/2)v3 = p2. This shows that p2 lies in Span S. Also, since the coefficients sum to 1, p2 is in aff S. In fact, p2 is in conv S, because the coefficients are also nonnegative.
c. proj_W p3 = v1 + v2 + 2v3 = p3. Thus p3 is in Span S. However, since the coefficients do not sum to one, p3 is not in aff S and certainly not in conv S.
d. proj_W p4 = (6/9)v1 + (8/9)v2 + (8/9)v3 ≠ p4. Since proj_W p4 is the closest point in Span S to p4, the point p4 is not in Span S. In particular, p4 cannot be in aff S or conv S.
7. Denote the given points by v1, v2, v3 and p1, p2, p3, p4, and let T = {v1, v2, v3}.
a. Use an augmented matrix (with four augmented columns) to write the homogeneous forms of p1, …, p4 in terms of the homogeneous forms of v1, v2, and v3, with the first step interchanging rows 1 and 3. Row reduction of [ṽ1 ṽ2 ṽ3 p̃1 p̃2 p̃3 p̃4] yields an identity block in the first three columns. The fourth column then reveals that
(1/3)ṽ1 + (1/6)ṽ2 + (1/2)ṽ3 = p̃1, and hence (1/3)v1 + (1/6)v2 + (1/2)v3 = p1.
Thus column 4 contains the barycentric coordinates of p1 relative to the triangle determined by T. Similarly, column 5 (as an augmented column) contains the barycentric coordinates of p2, column 6 contains the barycentric coordinates of p3, and column 7 contains the barycentric coordinates of p4.
b. p3 and p4 are outside conv T, because in each case at least one of the barycentric coordinates is negative. p1 is inside conv T, because all of its barycentric coordinates are positive. p2 is on the edge v2v3 of conv T, because its barycentric coordinates are nonnegative and its first coordinate is 0.
8. a. The barycentric coordinates of p1, p2, p3, and p4 are, respectively, (3/13, 12/13, −2/13), (8/13, 3/13, 2/13), (2/3, 0, 1/3), and (9/13, 5/13, −1/13).
b. The points p1 and p4 are outside conv T since they each have a negative coordinate. The point p2 is inside conv T since the coordinates are positive, and p3 is on the edge v1v3 of conv T.
9. The points p1 and p3 are outside the tetrahedron conv S since their barycentric coordinates contain negative numbers. The point p2 is on the face containing the vertices v2, v3, and v4 since its first barycentric coordinate is zero and the rest are positive. The point p4 is inside conv S since all its barycentric coordinates are positive. The point p5 is on the edge between v1 and v3 since the first and third barycentric coordinates are positive and the rest are zero.
10. The point q1 is inside conv S because the barycentric coordinates are all positive. The point q2 is outside conv S because it has one negative barycentric coordinate. The point q4 is outside conv S for the same reason. The point q3 is on the edge between v2 and v3 because (0, 3/4, 1/4, 0) shows that q3 is a convex combination of v2 and v3. The point q5 is on the face containing the vertices v1, v2, and v3 because (1/3, 1/3, 1/3, 0) shows that q5 is a convex combination of those vertices.
11. a. False. In order for y to be a convex combination, the c’s must also all be nonnegative. See the
definition at the beginning of this section.
b. False. If S is convex, then conv S is equal to S. See Theorem 7.
c. False. For example, the union of two distinct points is not convex, but the individual points are.
12. a. True. See the definition prior to Theorem 7.
b. True. Theorem 9.
c. False. The points do not have to be distinct. For example, S might consist of two points in ℝ⁵. A point in conv S would be a convex combination of these two points. Caratheodory’s Theorem requires n + 1 or fewer points.
13. If p, q ∈ f (S), then there exist r, s ∈ S such that f (r) = p and f (s) = q. The goal is to show that the line segment y = (1 − t)p + t q, for 0 ≤ t ≤ 1, is in f (S). Since f is linear,
y = (1 − t)p + t q = (1 − t) f (r) + t f (s) = f ((1 − t)r + t s)
Since S is convex, (1 − t)r + t s ∈ S for 0 ≤ t ≤ 1. Thus y ∈ f (S) and f (S) is convex.
14. Suppose r, s ∈ S and 0 ≤ t ≤ 1. Then, since f is a linear transformation,
f ((1 − t)r + t s) = (1 − t) f (r) + t f (s)
But f (r) ∈ T and f (s) ∈ T, so (1 − t) f (r) + t f (s) ∈ T since T is a convex set. It follows that (1 − t)r + t s ∈ S, because S consists of all points that f maps into T. This shows that S is convex.
15. It is straightforward to confirm the equations in the problem: (1) (1/3)v1 + (1/3)v2 + (1/6)v3 + (1/6)v4 = p and (2) v1 − v2 + v3 − v4 = 0. Notice that the coefficients of v1 and v3 in equation (2) are positive. With the notation of the proof of Caratheodory’s Theorem, d1 = 1 and d3 = 1. The corresponding coefficients in equation (1) are c1 = 1/3 and c3 = 1/6. The ratios of these coefficients are c1/d1 = 1/3 and c3/d3 = 1/6. Use the smaller ratio to eliminate v3 from equation (1). That is, subtract 1/6 times equation (2) from equation (1):
p = (1/3 − 1/6)v1 + (1/3 + 1/6)v2 + (1/6 − 1/6)v3 + (1/6 + 1/6)v4 = (1/6)v1 + (1/2)v2 + (1/3)v4
To obtain the second combination, multiply equation (2) by −1 to reverse the signs so that d2 and d4 become positive. Repeating the analysis with these terms eliminates the v4 term, resulting in
p = (1/2)v1 + (1/6)v2 + (1/3)v3.
16. Let v1 = (1, 0), v2 = (0, 3), v3 = (−3, 1), v4 = (−1, −1), and p = (−1, 2). It is straightforward to confirm the equations in the problem: (1) (1/121)v1 + (72/121)v2 + (37/121)v3 + (1/11)v4 = p and (2) 10v1 − 6v2 + 7v3 − 11v4 = 0.
Notice that the coefficients of v1 and v3 in equation (2) are positive. With the notation of the proof of Caratheodory’s Theorem, d1 = 10 and d3 = 7. The corresponding coefficients in equation (1) are c1 = 1/121 and c3 = 37/121. The ratios of these coefficients are c1/d1 = 1/1210 and c3/d3 = 37/847. Use the smaller ratio to eliminate v1 from equation (1). That is, subtract 1/1210 times equation (2) from equation (1):
p = (1/121 − 10/1210)v1 + (72/121 + 6/1210)v2 + (37/121 − 7/1210)v3 + (1/11 + 11/1210)v4 = (3/5)v2 + (3/10)v3 + (1/10)v4
To obtain the second combination, multiply equation (2) by −1 to reverse the signs so that d2 and d4 become positive. Repeating the analysis with these terms eliminates the v4 term, resulting in
p = (1/11)v1 + (6/11)v2 + (4/11)v3.
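Both reduced combinations can be checked with exact fractions. A minimal sketch (illustrative only; the points are those of Exercise 16 as reconstructed above):

```python
from fractions import Fraction as F

# Points of Exercise 16 (as reconstructed above).
v1, v2, v3, v4 = (1, 0), (0, 3), (-3, 1), (-1, -1)
p = (-1, 2)

def combo(weights, pts):
    """Weighted sum of 2-D points."""
    return tuple(sum(w * q[i] for w, q in zip(weights, pts))
                 for i in range(2))

w4 = (F(1, 121), F(72, 121), F(37, 121), F(1, 11))  # equation (1), four points
wa = (F(3, 5), F(3, 10), F(1, 10))                  # reduced: on v2, v3, v4
wb = (F(1, 11), F(6, 11), F(4, 11))                 # reduced: on v1, v2, v3
```

Each weight vector sums to one and reproduces p, confirming that p is a convex combination of only three of the four points, as Caratheodory's Theorem guarantees.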
17. Suppose A ⊆ B, where B is convex. Then, since B is convex, Theorem 7 implies that B contains all convex combinations of points of B. Hence B contains all convex combinations of points of A. That is, conv A ⊆ B.
18. Suppose A ⊆ B. Then A ⊆ B ⊆ conv B. Since conv B is convex, Exercise 17 shows that conv A ⊆ conv B.
19. a. Since A ⊆ (A ∪ B), Exercise 18 shows that conv A ⊆ conv (A ∪ B). Similarly, conv B ⊆ conv (A ∪ B). Thus, [(conv A) ∪ (conv B)] ⊆ conv (A ∪ B).
b. One possibility is to let A be two adjacent corners of a square and B be the other two corners. Then (conv A) ∪ (conv B) consists of two opposite sides of the square, but conv (A ∪ B) is the whole square.
20. a. Since (A ∩ B) ⊆ A, Exercise 18 shows that conv (A ∩ B) ⊆ conv A. Similarly, conv (A ∩ B) ⊆ conv B. Thus, conv (A ∩ B) ⊆ [(conv A) ∩ (conv B)].
b. One possibility is to let A be a pair of opposite vertices of a square and let B be the other pair of opposite vertices. Then conv A and conv B are intersecting diagonals of the square. A ∩ B is the empty set, so conv (A ∩ B) must be empty, too. But (conv A) ∩ (conv B) contains the single point where the diagonals intersect. So conv (A ∩ B) is a proper subset of (conv A) ∩ (conv B).
21.–22. [Figures. The solution to Exercise 21 shows the points f0(1/2), f1(1/2), and g(1/2); the solution to Exercise 22 shows the points f0(3/4), f1(3/4), and g(3/4).]
23. g(t) = (1 − t)f0(t) + t f1(t)
= (1 − t)[(1 − t)p0 + t p1] + t[(1 − t)p1 + t p2] = (1 − t)²p0 + 2t(1 − t)p1 + t²p2.
The sum of the weights in the linear combination for g is (1 − t)² + 2t(1 − t) + t², which equals (1 − 2t + t²) + (2t − 2t²) + t² = 1. The weights are each between 0 and 1 when 0 ≤ t ≤ 1, so g(t) is in conv{p0, p1, p2}.
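The identity between the nested (de Casteljau) form and the expanded form of g can be checked numerically. A minimal sketch (the control points below are hypothetical, chosen only for illustration):

```python
from fractions import Fraction as F

def bezier2(p0, p1, p2, t):
    """Evaluate the quadratic Bezier curve by repeated interpolation:
    g(t) = (1-t)*f0(t) + t*f1(t)."""
    f0 = tuple((1 - t) * a + t * b for a, b in zip(p0, p1))
    f1 = tuple((1 - t) * a + t * b for a, b in zip(p1, p2))
    return tuple((1 - t) * a + t * b for a, b in zip(f0, f1))

def bernstein2(p0, p1, p2, t):
    """The expanded form (1-t)^2 p0 + 2t(1-t) p1 + t^2 p2."""
    w = ((1 - t) ** 2, 2 * t * (1 - t), t ** 2)
    return tuple(w[0] * p0[i] + w[1] * p1[i] + w[2] * p2[i]
                 for i in range(len(p0)))

# Hypothetical control points, chosen only for illustration.
p0, p1, p2 = (0, 0), (2, 4), (6, 0)
t = F(1, 2)
```

At any t in [0, 1] the two forms agree, and the three weights sum to one, which is what places g(t) in conv{p0, p1, p2}.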
24. h(t) = (1 − t)g1(t) + t g2(t). Use the representation for g1(t) from Exercise 23, and the analogous representation for g2(t), based on the control points p1, p2, and p3, and obtain
h(t) = (1 − t)[(1 − t)²p0 + 2t(1 − t)p1 + t²p2] + t[(1 − t)²p1 + 2t(1 − t)p2 + t²p3]
= (1 − 3t + 3t² − t³)p0 + (2t − 4t² + 2t³)p1 + (t² − t³)p2 + (t − 2t² + t³)p1 + (2t² − 2t³)p2 + t³p3
= (1 − 3t + 3t² − t³)p0 + (3t − 6t² + 3t³)p1 + (3t² − 3t³)p2 + t³p3
By inspection, the sum of the weights in this linear combination is 1, for all t. To show that the weights are nonnegative for 0 ≤ t ≤ 1, factor the coefficients and write
h(t) = (1 − t)³p0 + 3t(1 − t)²p1 + 3t²(1 − t)p2 + t³p3 for 0 ≤ t ≤ 1
Thus, h(t) is in the convex hull of the control points p0, p1, p2, and p3.
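The coefficient bookkeeping in the expansion above is easy to get wrong by hand, so it is worth confirming that the collected polynomial weights really equal the factored Bernstein weights. A small sketch (illustrative only), comparing the two at several rational values of t:

```python
from fractions import Fraction as F

def expanded_weights(t):
    """Coefficients of p0..p3 as collected in the expansion of h(t)."""
    return (1 - 3*t + 3*t**2 - t**3,
            3*t - 6*t**2 + 3*t**3,
            3*t**2 - 3*t**3,
            t**3)

def factored_weights(t):
    """Bernstein form: (1-t)^3, 3t(1-t)^2, 3t^2(1-t), t^3."""
    return ((1 - t)**3, 3*t*(1 - t)**2, 3*t**2*(1 - t), t**3)

samples = [F(k, 10) for k in range(11)]   # t = 0, 0.1, ..., 1 exactly
all_match = all(expanded_weights(t) == factored_weights(t) for t in samples)
sums_to_one = all(sum(factored_weights(t)) == 1 for t in samples)
nonneg = all(min(factored_weights(t)) >= 0 for t in samples)
```

Since two cubics in t that agree at more than three points are identical, agreement on these eleven samples already proves the polynomial identity.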
8.4 SOLUTIONS
Notes:
In this section lines and planes are generalized to higher dimensions using the notion of
hyperplanes. Important topological ideas such as open, closed, and compact sets are introduced.
1. Let v1 = (−1, 4) and v2 = (3, 1). Then v2 − v1 = (3, 1) − (−1, 4) = (4, −3). Choose n to be a vector orthogonal to v2 − v1, for example let n = (3, 4). Then f (x1, x2) = 3x1 + 4x2 and d = f (v1) = 3(−1) + 4(4) = 13. This is easy to check by verifying that f (v2) is also 13.
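The construction in Exercise 1 (rotate v2 − v1 a quarter turn to get a normal, then evaluate the functional at either point) can be sketched as a small function; `line_through` is an illustrative helper, not code from the text:

```python
def line_through(v1, v2):
    """Hyperplane (line) in R^2 through v1 and v2: rotate v2 - v1 a
    quarter turn to get a normal n, then set d = n . v1."""
    dx, dy = v2[0] - v1[0], v2[1] - v1[1]
    n = (-dy, dx)                       # orthogonal to v2 - v1
    d = n[0] * v1[0] + n[1] * v1[1]
    return n, d

n, d = line_through((-1, 4), (3, 1))    # the points of Exercise 1
```

Evaluating n · x at the second point gives the same d, which is exactly the check suggested in the solution.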
2. Let v1 = (1, 4) and v2 = (−2, −1). Then v2 − v1 = (−2, −1) − (1, 4) = (−3, −5). Choose n to be a vector orthogonal to v2 − v1, for example let n = (5, −3). Then f (x1, x2) = 5x1 − 3x2 and d = f (v1) = 5(1) − 3(4) = −7.
3. a. The set is open since it does not contain any of its boundary points.
b. The set is closed since it contains all of its boundary points.
c. The set is neither open nor closed since it contains some, but not all, of its boundary points.
d. The set is closed since it contains all of its boundary points.
e. The set is closed since it contains all of its boundary points.
4. a. The set is closed since it contains all of its boundary points.
b. The set is open since it does not contain any of its boundary points.
c. The set is neither open nor closed since it contains some, but not all, of its boundary points.
d. The set is closed since it contains all of its boundary points.
e. The set is open since it does not contain any of its boundary points.
5. a. The set is not compact since it is not closed, however it is convex.
b. The set is compact since it is closed and bounded. It is also convex.
c. The set is not compact since it is not closed, however it is convex.
d. The set is not compact since it is not bounded. It is not convex.
e. The set is not compact since it is not bounded, however it is convex.
6. a. The set is compact since it is closed and bounded. It is not convex.
b. The set is not compact since it is not closed. It is not convex.
c. The set is not compact since it is not closed, however it is convex.
d. The set is not compact since it is not bounded. It is convex.
!!"# The set is not compact since it is not closed. It is not convex.
7. a. Let v1 = (1, 1, 3), v2 = (2, 4, 1), v3 = (−1, −2, 5), and n = (a, b, c), and compute the translated points
v2 − v1 = (1, 3, −2) and v3 − v1 = (−2, −3, 2).
To solve the system of equations (v2 − v1) · n = 0 and (v3 − v1) · n = 0, reduce the augmented matrix for a system of two equations with three variables:
[1 3 −2] · (a, b, c) = 0,  [−2 −3 2] · (a, b, c) = 0.
Row operations show that
[ 1  3 −2 0] ~ [1 0  0 0]
[−2 −3  2 0]   [0 3 −2 0]
A suitable normal vector is n = (0, 2, 3).
b. The linear functional is f (x1, x2, x3) = 2x2 + 3x3, so d = f (1, 1, 3) = 2 + 9 = 11. As a check, evaluate f at the other two points on the hyperplane: f (2, 4, 1) = 8 + 3 = 11 and f (−1, −2, 5) = −4 + 15 = 11.
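In ℝ³ the normal can also be obtained as the cross product of the two translated points; this is a standard shortcut (not the row reduction used in the text) and here gives the same n. A minimal sketch with the points of Exercise 7:

```python
def plane_through(v1, v2, v3):
    """Return (n, d) with n . x = d for the plane through v1, v2, v3 in R^3.
    n is the cross product of the translated points v2 - v1 and v3 - v1."""
    u = tuple(v2[i] - v1[i] for i in range(3))
    w = tuple(v3[i] - v1[i] for i in range(3))
    n = (u[1] * w[2] - u[2] * w[1],     # cross product, component by component
         u[2] * w[0] - u[0] * w[2],
         u[0] * w[1] - u[1] * w[0])
    d = sum(n[i] * v1[i] for i in range(3))
    return n, d

n, d = plane_through((1, 1, 3), (2, 4, 1), (-1, -2, 5))
```

All three given points satisfy n · x = d, matching the check in part (b) of the solution.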
8. a. Find a vector in the null space of the transpose of [v2 − v1  v3 − v1]. For example, take n = (4, 3, −6).
b. f (x) = 4x1 + 3x2 − 6x3, d = f (v1) = 8.
9. a. Find a vector in the null space of the transpose of [v2 − v1  v3 − v1  v4 − v1]. For example, take n = (3, −1, 2, 1).
b. f (x) = 3x1 − x2 + 2x3 + x4, d = f (v1) = 5.
10. a. Find a vector in the null space of the transpose of [v2 − v1  v3 − v1  v4 − v1]. For example, take n = (2, 3, −5, 1).
b. f (x) = 2x1 + 3x2 − 5x3 + x4, d = f (v1) = 4.
11. Compute n · p = 2; n · 0 = 0 < 2; n · v1 = 5 > 2; n · v2 = −2 < 2; n · v3 = 2. Hence v2 is on the same side of H as 0, v1 is on the other side, and v3 is in H.
12. Let H = [ f : d ], where f (x1, x2, x3) = 3x1 + x2 − 2x3. Compute f (a1) = −5, f (a2) = 4, f (a3) = 3, f (b1) = 7, f (b2) = 4, and f (b3) = 6. Choose d = 4, so that all the points in A are in or on one side of H and all the points in B are in or on the other side of H. There is no hyperplane parallel to H that strictly separates A and B, because both sets have a point at which f takes on the value 4. There may be (and in fact is) a hyperplane that is not parallel to H that strictly separates A and B.
13. H1 = {x : n1 · x = d1} and H2 = {x : n2 · x = d2}. Since p1 ∈ H1, d1 = n1 · p1 = 4. Similarly, d2 = n2 · p2 = 22. Solve the simultaneous system [1 2 4 2]x = 4 and [2 3 1 5]x = 22:

[ 1  2  4  2   4 ]   [ 1  0 −10   4   32 ]
[ 2  3  1  5  22 ] ~ [ 0  1   7  −1  −14 ]

The general solution provides one set of vectors, p, v1, and v2. Other choices are possible.

x = (32, −14, 0, 0) + x3(10, −7, 1, 0) + x4(−4, 1, 0, 1) = p + x3v1 + x4v2,

where p = (32, −14, 0, 0), v1 = (10, −7, 1, 0), and v2 = (−4, 1, 0, 1).

Then H1 ∩ H2 = {x : x = p + x3v1 + x4v2}.
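The parametrization can be spot-checked numerically; the check below is illustrative only, using the hyperplane data from Exercise 13.

```python
import numpy as np

n1, d1 = np.array([1, 2, 4, 2]), 4
n2, d2 = np.array([2, 3, 1, 5]), 22

# Particular solution and null-space directions from the row reduction.
p  = np.array([32, -14, 0, 0])
v1 = np.array([10,  -7, 1, 0])
v2 = np.array([-4,   1, 0, 1])

# Every point p + x3*v1 + x4*v2 should satisfy both hyperplane equations.
rng = np.random.default_rng(0)
for x3, x4 in rng.uniform(-5, 5, size=(10, 2)):
    x = p + x3 * v1 + x4 * v2
    assert np.isclose(n1 @ x, d1) and np.isclose(n2 @ x, d2)
print("all sampled points lie on both hyperplanes")
```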
14. Since each of F1 and F2 can be described as the solution set of a system A1x = b1 or A2x = b2, respectively, where A1 and A2 have rank 2, their intersection is described as the solution set of the combined system

[ A1 ]     [ b1 ]
[ A2 ] x = [ b2 ]

Since 2 ≤ rank [A1; A2] ≤ 4, the solution set will have dimension 6 − 2 = 4, 6 − 3 = 3, or 6 − 4 = 2.
15. f(x1, x2, x3, x4) = Ax = x1 − 3x2 + 4x3 − 2x4 and d = b = 5
16. f(x1, x2, x3, x4, x5) = Ax = 2x1 + 5x2 − 3x3 + 6x5 and d = b = 0
17. Since by Theorem 3 in Section 6.1, Row B = (Nul B)⊥, choose a nonzero vector n ∈ Nul B. For example, take n = (1, −2, 1). Then f(x1, x2, x3) = x1 − 2x2 + x3 and d = 0.
470 CHAPTER 8 The Geometry of Vector Spaces
18. Since by Theorem 3 in Section 6.1, Row B = (Nul B)⊥, choose a nonzero vector n ∈ Nul B. For example, take n = (−11, 4, 1). Then f(x1, x2, x3) = −11x1 + 4x2 + x3 and d = 0.
19. Theorem 3 in Section 6.1 says that (Col B)⊥ = Nul B^T. Since the two columns of B are clearly linearly independent, the rank of B is 2, as is the rank of B^T. So dim Nul B^T = 1, by the Rank Theorem, since there are three columns in B^T. This means that Nul B^T is one-dimensional, and any nonzero vector n in Nul B^T will be orthogonal to H and can be used as its normal vector. Solve the linear system B^T x = 0 by row reduction to find a basis for Nul B^T:

[ 1  4 −7  0 ]   [ 1  0  5  0 ]         [ −5 ]
[ 0  2 −6  0 ] ~ [ 0  1 −3  0 ],    n = [  3 ]
                                        [  1 ]

Now, let f(x1, x2, x3) = −5x1 + 3x2 + x3. Since the hyperplane H is a subspace, it goes through the origin and d must be 0.

The solution is easy to check by evaluating f at each of the columns of B.
20. Since by Theorem 3 in Section 6.1, Col B = (Nul B^T)⊥, choose a nonzero vector n ∈ Nul B^T. For example, take n = (−6, 2, 1). Then f(x1, x2, x3) = −6x1 + 2x2 + x3 and d = 0.
21. a. False. A linear functional goes from ℝ^n to ℝ. See the definition at the beginning of this section.
b. False. See the discussion of (1) and (4). There is a 1×n matrix A such that f(x) = Ax for all x in ℝ^n. Equivalently, there is a point n in ℝ^n such that f(x) = n · x for all x in ℝ^n.
c. True. See the comments after the definition of strictly separate.
d. False. See the sets in Figure 4.
22. a. True. See the statement after (3).
b. False. The vector n must be nonzero. If n = 0, then the given set is empty if d ≠ 0 and the set is all of ℝ^n if d = 0.
c. False. Theorem 12 requires that the sets A and B be convex. For example, A could be the
boundary of a circle and B could be the center of the circle.
d. False. Some other hyperplane might strictly separate them. See the caution at the end of
Example 8.
23. Notice that the side of the triangle closest to p is the segment from v2 to v3. A vector orthogonal to v3 − v2 is n = (3, −2). Take f(x1, x2) = 3x1 − 2x2. Then f(v2) = f(v3) = 9 and f(p) = 10, so any d satisfying 9 < d < 10 will work. There are other possible answers.
24. Notice that the side of the triangle closest to p is the segment from v1 to v3. A vector orthogonal to v3 − v1 is n = (2, 3). Take f(x1, x2) = 2x1 + 3x2. Then f(v1) = f(v3) = 4 and f(p) = 5, so any d satisfying 4 < d < 5 will work. There are other possible answers.
25. Let L be the line segment from the center of B(0, 3) to the center of B(p, 1). This is on the line through the origin in the direction of p. The length of L is (4² + 1²)^(1/2) ≈ 4.1231. This exceeds the sum of the radii of the two disks, so the disks do not touch. If the disks did touch, there would be no hyperplane (line) strictly separating them, but the line orthogonal to L through the point of tangency would (weakly) separate them. Since the disks are separated slightly, the hyperplane need not be exactly perpendicular to L, but the easiest one to find is a hyperplane H whose normal vector is p. So define f by f(x) = p · x.

To find d, evaluate f at any point on L that is between the two disks. If the disks were tangent, that point would be three-fourths of the distance between their centers, since the radii are 3 and 1. Since the disks are slightly separated, the distance is about 4.1231. Three-fourths of this distance is greater than 3, and one-fourth of this distance is greater than 1. A suitable value of d is f(q), where q = (.25)0 + (.75)p = (3, .75). So d = p · q = 4(3) + 1(.75) = 12.75.
26. The normal to the separating hyperplane has the direction of the line segment between p and q. So let n = p − q = (4, −2). The distance between p and q is √20, which is more than the sum of the radii of the two balls. The large ball has center q. A point three-fourths of the distance from q to p will be greater than 3 units from q and greater than 1 unit from p. This point is

x = .75p + .25q = .75(6, 1) + .25(2, 3) = (5, 1.5)

Compute n · x = 17. The desired hyperplane is {(x, y) : 4x − 2y = 17}.
27. Exercise 2(a) in Section 8.3 gives one possibility. Or let S = {(x, y) : x²y² = 1 and y > 0}. Then conv S is the upper (open) half-plane.
28. One possibility is B = {(x, y) : x²y² = 1 and y > 0} and A = {(x, y) : |x| ≤ 1 and y = 0}.
29. Let x, y ∈ B(p, δ) and suppose z = (1 − t)x + ty, where 0 ≤ t ≤ 1. Then

||z − p|| = ||[(1 − t)x + ty] − p|| = ||(1 − t)(x − p) + t(y − p)||
         ≤ (1 − t)||x − p|| + t||y − p|| < (1 − t)δ + tδ = δ

where the first inequality comes from the Triangle Inequality (Theorem 17 in Section 6.7) and the second inequality follows from x, y ∈ B(p, δ). It follows that z ∈ B(p, δ), and B(p, δ) is convex.
30. Let S be a bounded set. Then there exists a δ > 0 such that S ⊆ B(0, δ). But B(0, δ) is convex by Exercise 29, so Theorem 9 in Section 8.3 (or Exercise 17 in Section 8.3) implies that conv S ⊆ B(0, δ), and conv S is bounded.
8.5 SOLUTIONS
Notes:
A polytope is the convex hull of a finite number of points. Polytopes and simplices are important
in linear programming, which has numerous applications in engineering design and business
management. The behavior of functions on polytopes is studied in this section.
1. Evaluate each linear functional at each of the three extreme points of S. Then select the extreme point(s) that give the maximum value of the functional.
a. f(p1) = 1, f(p2) = −1, and f(p3) = −3, so m = 1 at p1.
b. f(p1) = 1, f(p2) = 5, and f(p3) = 1, so m = 5 at p2.
c. f(p1) = −3, f(p2) = −3, and f(p3) = 5, so m = 5 at p3.
!"# Evaluate each linear functional at each of the three extreme points of S. Then select the point(s) that
give the maximum value of the functional.
a. f
(p
1
) = 1, f
(p
2
) = 3, and f
(p
3
) = 3, so m = 3 on the set conv
{p
2
, p
3
}.
b. f
(p
1
) = 1, f
(p
2
) = 1, and f
(p
3
) = 1, so m = 1 on the set conv
{p
1
, p
2
}.
c. f
(p
1
) = –1, f
(p
2
) = –3, and f
(p
3
) = 0, so m = 0 at p
3
"!
3. Evaluate each linear functional at each of the three extreme points of S. Then select the point(s) that give the minimum value of the functional.
a. f(p1) = 1, f(p2) = −1, and f(p3) = −3, so m = −3 at the point p3.
b. f(p1) = 1, f(p2) = 5, and f(p3) = 1, so m = 1 on the set conv{p1, p3}.
c. f(p1) = −3, f(p2) = −3, and f(p3) = 5, so m = −3 on the set conv{p1, p2}.
4. Evaluate each linear functional at each of the three extreme points of S. Then select the point(s) that give the minimum value of the functional.
a. f(p1) = −1, f(p2) = 3, and f(p3) = 3, so m = −1 at the point p1.
b. f(p1) = 1, f(p2) = 1, and f(p3) = −1, so m = −1 at the point p3.
c. f(p1) = −1, f(p2) = −3, and f(p3) = 0, so m = −3 at the point p2.
5. The two inequalities are (a) x1 + 2x2 ≤ 10 and (b) 3x1 + x2 ≤ 15. Line (a) goes from (0,5) to (10,0). Line (b) goes from (0,15) to (5,0). One vertex is (0,0). The x1-intercepts (when x2 = 0) are 10 and 5, so (5,0) is a vertex. The x2-intercepts (when x1 = 0) are 5 and 15, so (0,5) is a vertex. The two lines intersect at (4,3), so (4,3) is a vertex. The minimal representation is

{ (0,0), (5,0), (4,3), (0,5) }
6. The two inequalities are (a) 2x1 + 3x2 ≤ 18 and (b) 4x1 + x2 ≤ 16. Line (a) goes from (0,6) to (9,0). Line (b) goes from (0,16) to (4,0). One vertex is (0,0). The x1-intercepts (when x2 = 0) are 9 and 4, so (4,0) is a vertex. The x2-intercepts (when x1 = 0) are 6 and 16, so (0,6) is a vertex. The two lines intersect at (3,4), so (3,4) is a vertex. The minimal representation is

{ (0,0), (4,0), (3,4), (0,6) }
7. The three inequalities are (a) x1 + 3x2 ≤ 18, (b) x1 + x2 ≤ 10, and (c) 4x1 + x2 ≤ 28. Line (a) goes from (0,6) to (18,0). Line (b) goes from (0,10) to (10,0). And line (c) goes from (0,28) to (7,0). One vertex is (0,0). The x1-intercepts (when x2 = 0) are 18, 10, and 7, so (7,0) is a vertex. The x2-intercepts (when x1 = 0) are 6, 10, and 28, so (0,6) is a vertex. All three lines go through (6,4), so (6,4) is a vertex. The minimal representation is

{ (0,0), (7,0), (6,4), (0,6) }
8. The three inequalities are (a) 2x1 + x2 ≤ 8, (b) x1 + x2 ≤ 6, and (c) x1 + 2x2 ≤ 7. Line (a) goes from (0,8) to (4,0). Line (b) goes from (0,6) to (6,0). And line (c) goes from (0,3.5) to (7,0). One vertex is (0,0). The x1-intercepts (when x2 = 0) are 4, 6, and 7, so (4,0) is a vertex. The x2-intercepts (when x1 = 0) are 8, 6, and 3.5, so (0,3.5) is a vertex. Lines (a) and (c) intersect at (3,2), which also satisfies inequality (b), so (3,2) is a vertex. The minimal representation is

{ (0,0), (4,0), (3,2), (0,3.5) }
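The vertex hunt in Exercises 5-8 can be automated; the helper below is a hypothetical sketch (not from the text): intersect each pair of boundary lines, including the coordinate axes, and keep the intersection points that satisfy every constraint. The data shown are those of Exercise 8.

```python
import itertools
import numpy as np

# Constraints a . x <= b for Exercise 8, with x1 >= 0 and x2 >= 0
# rewritten as -x1 <= 0 and -x2 <= 0.
A = np.array([[2, 1], [1, 1], [1, 2], [-1, 0], [0, -1]], dtype=float)
b = np.array([8, 6, 7, 0, 0], dtype=float)

vertices = set()
for i, j in itertools.combinations(range(len(b)), 2):
    M = A[[i, j]]
    if abs(np.linalg.det(M)) < 1e-12:
        continue                                  # parallel boundary lines
    x = np.linalg.solve(M, b[[i, j]])
    if np.all(A @ x <= b + 1e-9):                 # feasible intersection = vertex
        vertices.add(tuple(float(v) for v in np.round(x, 6)))

print(sorted(vertices))  # -> [(0.0, 0.0), (0.0, 3.5), (3.0, 2.0), (4.0, 0.0)]
```

Note that the redundant constraint (b) contributes no vertex, matching the minimal representation above.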
9. The origin is an extreme point, but it is not a vertex. It is an
extreme point since it is not in the interior of any line segment
that lies in S. It is not a vertex since the only supporting
hyperplane (line) containing the origin also contains the line
segment from (0,0) to (3,0).
10. One possibility is a ray. It has an extreme point at one end.
11. One possibility is to let S be a square that includes part of the boundary but not all of it. For example,
include just two adjacent edges. The convex hull of the profile P is a triangular region.
12. a. f0(S^5) = 6, f1(S^5) = 15, f2(S^5) = 20, f3(S^5) = 15, f4(S^5) = 6, and 6 − 15 + 20 − 15 + 6 = 2.
b.
        f0   f1   f2   f3   f4
S^1      2
S^2      3    3
S^3      4    6    4
S^4      5   10   10    5
S^5      6   15   20   15    6
In general, f_k(S^n) = C(n + 1, k + 1), where C(a, b) = a!/(b!(a − b)!) is the binomial coefficient.
13. a. To determine the number of k-faces of the 5-dimensional hypercube C^5, look at the pattern that is followed in building C^4 from C^3. For example, the 2-faces in C^4 include the 2-faces of C^3 and the 2-faces in the translated image of C^3. In addition, there are the 1-faces of C^3 that are "stretched" into 2-faces. In general, the number of k-faces in C^n equals twice the number of k-faces in C^(n−1) plus the number of (k − 1)-faces in C^(n−1). Here is the pattern: f_k(C^n) = 2f_k(C^(n−1)) + f_(k−1)(C^(n−1)). For k = 0, 1, …, 4, and n = 5, this gives f0(C^5) = 32, f1(C^5) = 80, f2(C^5) = 80, f3(C^5) = 40, and f4(C^5) = 10. These numbers satisfy Euler's formula since 32 − 80 + 80 − 40 + 10 = 2.
b. The general formula is f_k(C^n) = 2^(n−k) C(n, k), where C(a, b) = a!/(b!(a − b)!) is the binomial coefficient.
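The recurrence and the closed form in Exercise 13 are easy to cross-check by machine; the script below is an illustrative aside, not part of the printed solution.

```python
from math import comb

def f_cube(n):
    """f-vector (f_0, ..., f_(n-1)) of the n-cube via the recurrence
    f_k(C^n) = 2 f_k(C^(n-1)) + f_(k-1)(C^(n-1))."""
    f = [2]                        # C^1 is a segment: two vertices
    for dim in range(2, n + 1):
        # Append 1, counting the whole (dim-1)-cube as its single top face,
        # so the shifted term f_(k-1) is available for k = dim - 1.
        prev = f + [1]
        f = [2 * prev[k] + (prev[k - 1] if k > 0 else 0) for k in range(dim)]
    return f

fv = f_cube(5)
print(fv)  # [32, 80, 80, 40, 10]
assert fv == [2 ** (5 - k) * comb(5, k) for k in range(5)]     # closed form
assert sum((-1) ** k * c for k, c in enumerate(fv)) == 2       # Euler's formula
```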
14. a. X^1 is a line segment. X^2 is a parallelogram.
b. f0(X^3) = 6, f1(X^3) = 12, f2(X^3) = 8. X^3 is an octahedron.
c. f0(X^4) = 8, f1(X^4) = 24, f2(X^4) = 32, f3(X^4) = 16, and 8 − 24 + 32 − 16 = 0.
d. f_k(X^n) = 2^(k+1) C(n, k + 1), 0 ≤ k ≤ n − 1, where C(a, b) = a!/(b!(a − b)!) is the binomial coefficient.
15. a. f0(P^n) = f0(Q) + 1
b. f_k(P^n) = f_k(Q) + f_(k−1)(Q)
c. f_(n−1)(P^n) = f_(n−2)(Q) + 1
16. a. True. See the definition at the beginning of this section.
b. True. See the definition after Example 1.
c. False. S must be compact. See Theorem 15.
d. True. See the comment after Fig. 7.
17. a. False. It has six facets (faces).
b. True. See Theorem 14.
c. False. The maximum is always attained at some extreme point, but there may be other points that
are not extreme points at which the maximum is attained. See Theorem 16.
d. True. Follows from Euler’s formula with n = 2.
18. Let v be an extreme point of the convex set S, and let T = {y ∈ S : y ≠ v}. If y and z are in T, then the segment yz lies in S, since S is convex. But since v is an extreme point of S, v is not on the segment yz, so the segment lies in T. Thus T is convex.
!" #
"!
#
"!
#
#!
8.6 Solutions 475
!
Copyright © 2012 Pearson Education, Inc. Publishing as Addison-Wesley.
Conversely, suppose v ∈ S but v is not an extreme point of S. Then there exist y and z in S such that v is on the segment yz, with v ≠ y and v ≠ z. It follows that y and z are in T, but the segment yz does not lie in T. Hence T is not convex.
19. Let S be convex and let x ∈ cS + dS, where c > 0 and d > 0. Then there exist s1 and s2 in S such that x = cs1 + ds2. But then

x = cs1 + ds2 = (c + d)( (c/(c + d))s1 + (d/(c + d))s2 )

Now show that the expression on the right side is a member of (c + d)S.

For the converse, pick a typical point in (c + d)S and show it is in cS + dS.
20. For example, let S = {1, 2} in ℝ¹. Then 2S = {2, 4}, 3S = {3, 6}, and (2 + 3)S = {5, 10}. However, 2S + 3S = {2, 4} + {3, 6} = {2 + 3, 4 + 3, 2 + 6, 4 + 6} = {5, 7, 8, 10} ≠ (2 + 3)S.
21. Suppose A and B are convex. Let x, y ∈ A + B. Then there exist a, c ∈ A and b, d ∈ B such that x = a + b and y = c + d. For any t such that 0 ≤ t ≤ 1, we have

w = (1 − t)x + ty = (1 − t)(a + b) + t(c + d)
  = [(1 − t)a + tc] + [(1 − t)b + td]

But (1 − t)a + tc ∈ A since A is convex, and (1 − t)b + td ∈ B since B is convex. Thus w is in A + B, which shows that A + B is convex.
22. a. Since each edge belongs to two facets, kr is twice the number of edges: kr = 2e. Since each edge has two vertices, sv = 2e.
b. v − e + r = 2, so 2e/s − e + 2e/k = 2, which can be rewritten as 1/s + 1/k = 1/2 + 1/e.
c. A polygon must have at least three sides, so k ≥ 3. At least three edges meet at each vertex, so s ≥ 3. But k and s cannot both be greater than 3, for then the left side of the equation in (b) could not exceed 1/2.

When k = 3, we get 1/s − 1/e = 1/6, so s = 3, 4, or 5. For these values, we get e = 6, 12, or 30, corresponding to the tetrahedron, the octahedron, and the icosahedron, respectively.

When s = 3, we get 1/k − 1/e = 1/6, so k = 3, 4, or 5 and e = 6, 12, or 30, respectively. These values correspond to the tetrahedron, the cube, and the dodecahedron.
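The case analysis in part (c) can also be enumerated directly; the script below is an illustrative aside that searches small k and s and keeps the pairs where 1/e = 1/s + 1/k − 1/2 is positive, recovering the five Platonic solids.

```python
from fractions import Fraction

# For facet size k and vertex degree s (both >= 3), part (b) gives
# 1/e = 1/s + 1/k - 1/2; a solid exists only when this is positive.
solids = []
for k in range(3, 7):
    for s in range(3, 7):
        inv_e = Fraction(1, s) + Fraction(1, k) - Fraction(1, 2)
        if inv_e > 0:
            e = 1 / inv_e                      # number of edges
            v, r = 2 * e / s, 2 * e / k        # vertices, facets from sv = kr = 2e
            solids.append((k, s, int(e), int(v), int(r)))

for row in solids:
    print("k=%d s=%d e=%d v=%d r=%d" % row)
# exactly five solutions: (k, s) in {(3,3), (3,4), (3,5), (4,3), (5,3)}
assert len(solids) == 5
```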
8.6 SOLUTIONS
Notes:
This section moves beyond lines and planes to the study of some of the curves that are used to
model surfaces in engineering and computer aided design. Notice that these curves have a matrix
representation.
1. The original curve is x(t) = (1 − t)³p0 + 3t(1 − t)²p1 + 3t²(1 − t)p2 + t³p3 (0 ≤ t ≤ 1). Since the curve is determined by its control points, it seems reasonable that to translate the curve, one should translate the control points. In this case, the new Bézier curve y(t) would have the equation
y(t) = (1 − t)³(p0 + b) + 3t(1 − t)²(p1 + b) + 3t²(1 − t)(p2 + b) + t³(p3 + b)
     = (1 − t)³p0 + 3t(1 − t)²p1 + 3t²(1 − t)p2 + t³p3 + (1 − t)³b + 3t(1 − t)²b + 3t²(1 − t)b + t³b

A routine algebraic calculation verifies that (1 − t)³ + 3t(1 − t)² + 3t²(1 − t) + t³ = 1 for all t. Thus y(t) = x(t) + b for all t, and translation by b maps a Bézier curve into a Bézier curve.
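This partition-of-unity argument can be sketched numerically; the control points and translation vector below are made up for illustration.

```python
import numpy as np

def bezier(t, P):
    """Cubic Bezier point at t for control points P (4 x 2 array)."""
    w = np.array([(1 - t) ** 3, 3 * t * (1 - t) ** 2,
                  3 * t ** 2 * (1 - t), t ** 3])
    return w @ P

P = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 2.0], [4.0, 0.0]])  # hypothetical
b = np.array([5.0, -1.0])                                       # translation

for t in np.linspace(0, 1, 11):
    # The weights sum to 1, so translating the control points
    # translates every point of the curve by the same b.
    assert np.allclose(bezier(t, P + b), bezier(t, P) + b)
print("y(t) = x(t) + b holds at all sampled t")
```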
2. a. Equation (15) reveals that each polynomial weight is nonnegative for 0 ≤ t ≤ 1, since 4 − 3t > 0. For the sum of the coefficients, use (15) with the first term expanded: 1 − 3t + 3t² − t³. The 1 here plus the 4 and 1 in the coefficients of p1 and p2, respectively, sum to 6, while the other terms sum to 0. This explains the 1/6 in the formula for x(t), which makes the coefficients sum to 1. Thus, x(t) is a convex combination of the control points for 0 ≤ t ≤ 1.
b. Since the coefficients inside the brackets in equation (14) sum to 6, it follows that

b = (1/6)[6b] = (1/6)[(1 − t)³b + (3t³ − 6t² + 4)b + (−3t³ + 3t² + 3t + 1)b + t³b]

and hence x(t) + b may be written in a similar form, with p_i replaced by p_i + b for each i. This shows that x(t) + b is a cubic B-spline with control points p_i + b for i = 0, …, 3.
3. a. x′(t) = (−3 + 6t − 3t²)p0 + (3 − 12t + 9t²)p1 + (6t − 9t²)p2 + 3t²p3, so x′(0) = −3p0 + 3p1 = 3(p1 − p0), and x′(1) = −3p2 + 3p3 = 3(p3 − p2). This shows that the tangent vector x′(0) points in the direction from p0 to p1 and is three times the length of p1 − p0. Likewise, x′(1) points in the direction from p2 to p3 and is three times the length of p3 − p2. In particular, x′(1) = 0 if and only if p3 = p2.
b. x″(t) = (6 − 6t)p0 + (−12 + 18t)p1 + (6 − 18t)p2 + 6tp3, so that

x″(0) = 6p0 − 12p1 + 6p2 = 6(p0 − p1) + 6(p2 − p1)
and x″(1) = 6p1 − 12p2 + 6p3 = 6(p1 − p2) + 6(p3 − p2)

For a picture of x″(0), construct a coordinate system with the origin at p1, temporarily label p0 as p0 − p1, and label p2 as p2 − p1. Finally, construct a line from this new origin through the sum of p0 − p1 and p2 − p1, extended out a bit. That line points in the direction of x″(0).
4. a. x′(t) = (1/6)[(−3 + 6t − 3t²)p0 + (9t² − 12t)p1 + (−9t² + 6t + 3)p2 + 3t²p3], so

x′(0) = (1/2)(p2 − p0) and x′(1) = (1/2)(p3 − p1)

(Verify that, in the first part of Fig. 10, a line drawn through p0 and p2 is parallel to the tangent line at the beginning of the B-spline.)

When x′(0) and x′(1) are both zero, the figure collapses and the convex hull of the set of control points is the line segment between p0 and p3, in which case x(t) is a straight line. Where does x(t) start? In this case,

x(t) = (1/6)[(−4t³ + 6t² + 2)p0 + (4t³ − 6t² + 4)p3]

x(0) = (1/3)p0 + (2/3)p3 and x(1) = (2/3)p0 + (1/3)p3
[Figure for Exercise 3: with the origin moved to p1, w = (p0 − p1) + (p2 − p1) = (1/6)x″(0).]
The curve begins closer to p3 and finishes closer to p0. Could it turn around during its travel? Since x′(t) = 2t(1 − t)(p0 − p3), the curve travels in the direction p0 − p3, so when x′(0) = x′(1) = 0, the curve always moves away from p3 toward p0 for 0 < t < 1.
b. x″(t) = (1 − t)p0 + (−2 + 3t)p1 + (1 − 3t)p2 + tp3

x″(0) = p0 − 2p1 + p2 = (p0 − p1) + (p2 − p1)
and x″(1) = p1 − 2p2 + p3 = (p1 − p2) + (p3 − p2)

For a picture of x″(0), construct a coordinate system with the origin at p1, temporarily label p0 as p0 − p1, and label p2 as p2 − p1. Finally, construct a line from this new origin to the sum of p0 − p1 and p2 − p1. That segment represents x″(0).

For a picture of x″(1), construct a coordinate system with the origin at p2, temporarily label p1 as p1 − p2, and label p3 as p3 − p2. Finally, construct a line from this new origin to the sum of p1 − p2 and p3 − p2. That segment represents x″(1).
5. a. From Exercise 3(a) or equation (9) in the text, x′(1) = 3(p3 − p2). Use the formula for x′(0), with the control points from y(t), and obtain

y′(0) = −3p3 + 3p4 = 3(p4 − p3)

For C¹ continuity, 3(p3 − p2) = 3(p4 − p3), so p3 = (p4 + p2)/2, and p3 is the midpoint of the line segment from p2 to p4.
b. If x′(1) = y′(0) = 0, then p2 = p3 and p3 = p4. Thus, the "line segment" from p2 to p4 is just the point p3. [Note: In this case, the combined curve is still C¹ continuous, by definition. However, some choices of the other control points, p0, p1, p5, and p6, can produce a curve with a visible "corner" at p3, in which case the curve is not G¹ continuous at p3.]
6. a. With x(t) as in Exercise 2,

x(0) = (p0 + 4p1 + p2)/6 and x(1) = (p1 + 4p2 + p3)/6

Use the formula for x(0), but with the shifted control points for y(t), and obtain

y(0) = (p1 + 4p2 + p3)/6

This equals x(1), so the B-spline is G⁰ continuous at the join point.
b. From Exercise 4(a),

x′(1) = (p3 − p1)/2 and x′(0) = (p2 − p0)/2

Use the formula for x′(0) with the control points for y(t), and obtain

y′(0) = (p3 − p1)/2 = x′(1)

Thus the B-spline is C¹ continuous at the join point.
7. From Exercise 3(b),

x″(0) = 6(p0 − p1) + 6(p2 − p1) and x″(1) = 6(p1 − p2) + 6(p3 − p2)

Use x″(0) with the control points for y(t), to get

y″(0) = 6(p3 − p4) + 6(p5 − p4)
[Figure: with the origin moved to p2, w = (p1 − p2) + (p3 − p2) = x″(1).]
Set x″(1) = y″(0) and divide by 6, to get

(p1 − p2) + (p3 − p2) = (p3 − p4) + (p5 − p4)   (*)

Since the curve is C¹ continuous at p3, the point p3 is the midpoint of the segment from p2 to p4, by Exercise 5(a). Thus p3 = (1/2)(p2 + p4), which leads to p4 − p3 = p3 − p2. Substituting into (*) gives

(p1 − p2) + (p3 − p2) = −(p3 − p2) + p5 − p4
(p1 − p2) + 2(p3 − p2) + p4 = p5

Finally, again from C¹ continuity, p4 = p3 + p3 − p2. Thus,

p5 = p3 + (p1 − p2) + 3(p3 − p2)

So p4 and p5 are uniquely determined by p1, p2, and p3. Only p0 can be chosen arbitrarily.
8. From Exercise 4(b), x″(0) = p0 − 2p1 + p2 and x″(1) = p1 − 2p2 + p3. Use the formula for x″(0), with the shifted control points for y(t), to get

y″(0) = p1 − 2p2 + p3 = x″(1)

Thus the curve has C² continuity at x(1).
9. Write a vector of the polynomial weights for x(t), expand the polynomial weights, and factor the vector as M_B u(t):

[ 1 − 4t + 6t² − 4t³ + t⁴ ]   [ 1 −4   6  −4  1 ] [ 1  ]
[ 4t − 12t² + 12t³ − 4t⁴  ]   [ 0  4 −12  12 −4 ] [ t  ]
[ 6t² − 12t³ + 6t⁴        ] = [ 0  0   6 −12  6 ] [ t² ]
[ 4t³ − 4t⁴               ]   [ 0  0   0   4 −4 ] [ t³ ]
[ t⁴                      ]   [ 0  0   0   0  1 ] [ t⁴ ]

      [ 1 −4   6  −4  1 ]
      [ 0  4 −12  12 −4 ]
M_B = [ 0  0   6 −12  6 ]
      [ 0  0   0   4 −4 ]
      [ 0  0   0   0  1 ]
10. Write a vector of the polynomial weights for x(t), expand the polynomial weights, taking care to write the terms in ascending powers of t, and factor the vector as M_S u(t):

      [ 1 − 3t + 3t² − t³  ]         [ 1 −3  3 −1 ] [ 1  ]
(1/6) [ 4 − 6t² + 3t³      ] = (1/6) [ 4  0 −6  3 ] [ t  ]  = M_S u(t),
      [ 1 + 3t + 3t² − 3t³ ]         [ 1  3  3 −3 ] [ t² ]
      [ t³                 ]         [ 0  0  0  1 ] [ t³ ]

            [ 1 −3  3 −1 ]
M_S = (1/6) [ 4  0 −6  3 ]
            [ 1  3  3 −3 ]
            [ 0  0  0  1 ]
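Basis matrices like these can be generated programmatically by expanding the weight polynomials; the helper below is a hypothetical sketch using numpy's polynomial tools, shown here for the degree-4 Bernstein weights of Exercise 9.

```python
import numpy as np
from math import comb
from numpy.polynomial import polynomial as P

def bernstein_matrix(n):
    """Row k holds the ascending-power coefficients of C(n,k) t^k (1-t)^(n-k)."""
    rows = []
    for k in range(n + 1):
        # [0]*k + [1] is the polynomial t^k in ascending coefficient order.
        w = comb(n, k) * P.polymul([0] * k + [1], P.polypow([1, -1], n - k))
        rows.append(w)
    return np.array(rows).astype(int)

M_B = bernstein_matrix(4)
assert (M_B == [[1, -4,   6,  -4,  1],
                [0,  4, -12,  12, -4],
                [0,  0,   6, -12,  6],
                [0,  0,   0,   4, -4],
                [0,  0,   0,   0,  1]]).all()
print(M_B)
```

The B-spline matrix M_S of Exercise 10 could be produced the same way from its four weight polynomials.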
11. a. True. See equation (2).
b. False. Example 1 shows that the tangent vector x′(0) at p0 is two times the directed line segment from p0 to p1.
c. True. See Example 2.
12. a. False. The essential properties are preserved under translations as well as linear transformations.
See the comment after Figure 1.
b. True. This is the definition of G
0
continuity at a point.
c. False. The Bézier basis matrix is a matrix of the polynomial coefficients of the control points.
See the definition before equation (4).
13. a. From (12), q1 − q0 = (1/2)(p1 − p0). Since q0 = p0, q1 = (1/2)(p0 + p1).
b. From (13), 8(q3 − q2) = −p0 − p1 + p2 + p3. So 8q3 + p0 + p1 − p2 − p3 = 8q2.
c. Use (8) to substitute for 8q3, and obtain
8q2 = (p0 + 3p1 + 3p2 + p3) + p0 + p1 − p2 − p3 = 2p0 + 4p1 + 2p2

Then divide by 8, regroup the terms, and use part (a) to obtain

q2 = (1/4)p0 + (1/2)p1 + (1/4)p2 = (1/4)(p0 + p1) + (1/4)(p1 + p2)
   = (1/2)q1 + (1/4)(p1 + p2) = (1/2)(q1 + (1/2)(p1 + p2))
14. a. 3(r3 − r2) = z′(1), by (9) with z′(1) and r_j in place of x′(1) and p_j.
z′(1) = .5x′(1), by (11) with t = 1.
.5x′(1) = (.5)3(p3 − p2), by (9).
b. From part (a), 6(r3 − r2) = 3(p3 − p2), so r3 − r2 = (1/2)p3 − (1/2)p2, and r2 = r3 − (1/2)p3 + (1/2)p2. Since r3 = p3, this equation becomes r2 = (1/2)(p2 + p3).
c. 3(r1 − r0) = z′(0), by (9) with z′(0) and r_j in place of x′(0) and p_j.
z′(0) = .5x′(.5), by (11) with t = 0.
d. Part (c) and (10) show that 3(r1 − r0) = (3/8)(−p0 − p1 + p2 + p3). Multiply by 8/3 and rearrange to obtain 8r1 = −p0 − p1 + p2 + p3 + 8r0.
e. From (8), 8r0 = p0 + 3p1 + 3p2 + p3. Substitute into the equation from part (d), and obtain 8r1 = 2p1 + 4p2 + 2p3. Divide by 8 and use part (b) to obtain

r1 = (1/4)p1 + (1/2)p2 + (1/4)p3 = (1/4)(p1 + p2) + (1/4)(p2 + p3) = (1/4)(p1 + p2) + (1/2)r2

Interchange the terms on the right, and obtain r1 = (1/2)[r2 + (1/2)(p1 + p2)].
15. a. From (11), y′(1) = .5x′(.5) = z′(0).
b. Observe that y′(1) = 3(q3 − q2). This follows from (9), with y(t) and its control points in place of x(t) and its control points. Similarly, for z(t) and its control points, z′(0) = 3(r1 − r0). By part (a), 3(q3 − q2) = 3(r1 − r0). Replace r0 by q3, and obtain q3 − q2 = r1 − q3, and hence q3 = (q2 + r1)/2.
c. Set q0 = p0 and r3 = p3. Compute q1 = (p0 + p1)/2 and r2 = (p2 + p3)/2. Compute m = (p1 + p2)/2. Compute q2 = (q1 + m)/2 and r1 = (m + r2)/2. Compute q3 = (q2 + r1)/2 and set r0 = q3.
16. A Bézier curve is completely determined by its four control points. Two are given directly: p0 = x(0) and p3 = x(1). From equation (9), x′(0) = 3(p1 − p0) and x′(1) = 3(p3 − p2). Solving gives

p1 = p0 + (1/3)x′(0) and p2 = p3 − (1/3)x′(1)
17. a. The quadratic curve is w(t) = (1 − t)²p0 + 2t(1 − t)p1 + t²p2. From Example 1, the tangent vectors at the endpoints are w′(0) = 2p1 − 2p0 and w′(1) = 2p2 − 2p1. Denote the control points of x(t) by r0, r1, r2, and r3. Then

r0 = x(0) = w(0) = p0 and r3 = x(1) = w(1) = p2

From equation (9) or Exercise 3(a) (using r_i in place of p_i) and Example 1,

−3r0 + 3r1 = x′(0) = w′(0) = 2p1 − 2p0

so −p0 + r1 = (2p1 − 2p0)/3 and
r1 = (p0 + 2p1)/3   (i)

Similarly, using the tangent data at t = 1, along with equation (9) and Example 1, yields

−3r2 + 3r3 = x′(1) = w′(1) = 2p2 − 2p1,

so −r2 + p2 = (2p2 − 2p1)/3, r2 = p2 − (2p2 − 2p1)/3, and

r2 = (2p1 + p2)/3   (ii)
b. Write the standard formula (7) in this section, with r_i in place of p_i for i = 1, …, 4, and then replace r0 and r3 by p0 and p2, respectively:

x(t) = (1 − 3t + 3t² − t³)p0 + (3t − 6t² + 3t³)r1 + (3t² − 3t³)r2 + t³p2   (iii)

Use the formulas (i) and (ii) for r1 and r2 to examine the second and third terms in (iii):

(3t − 6t² + 3t³)r1 = (3t − 6t² + 3t³)(1/3)p0 + (3t − 6t² + 3t³)(2/3)p1
                   = (t − 2t² + t³)p0 + (2t − 4t² + 2t³)p1

(3t² − 3t³)r2 = (3t² − 3t³)(2/3)p1 + (3t² − 3t³)(1/3)p2 = (2t² − 2t³)p1 + (t² − t³)p2

When these two results are substituted in (iii), the coefficient of p0 is

(1 − 3t + 3t² − t³) + (t − 2t² + t³) = 1 − 2t + t² = (1 − t)²

The coefficient of p1 is

(2t − 4t² + 2t³) + (2t² − 2t³) = 2t − 2t² = 2t(1 − t)

The coefficient of p2 is (t² − t³) + t³ = t². So x(t) = (1 − t)²p0 + 2t(1 − t)p1 + t²p2, which shows that x(t) is the quadratic Bézier curve w(t).
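This degree-elevation identity can be verified numerically; the quadratic control points below are arbitrary illustrative data.

```python
import numpy as np

p0, p1, p2 = (np.array(v, float) for v in [(0, 0), (2, 3), (5, 1)])  # hypothetical

# Cubic control points from (i) and (ii):
r0, r3 = p0, p2
r1 = (p0 + 2 * p1) / 3
r2 = (2 * p1 + p2) / 3

for t in np.linspace(0, 1, 11):
    quad = (1 - t) ** 2 * p0 + 2 * t * (1 - t) * p1 + t ** 2 * p2
    cubic = ((1 - t) ** 3 * r0 + 3 * t * (1 - t) ** 2 * r1
             + 3 * t ** 2 * (1 - t) * r2 + t ** 3 * r3)
    assert np.allclose(quad, cubic)
print("cubic with elevated control points equals the quadratic curve")
```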
18.
[ p0                   ]
[ −3p0 + 3p1           ]
[ 3p0 − 6p1 + 3p2      ]
[ −p0 + 3p1 − 3p2 + p3 ]