NOTES ON LINEAR ALGEBRA

 

CONTENTS:

[15] COMPLEX NUMBERS

[16a] FINDING EIGENVECTORS

[16b] FINDING EIGENVALUES

 

 

[15] COMPLEX NUMBERS

 

While we’ve seen in previous sections how useful eigenvalues and eigenvectors can be, we haven’t yet seen how to find them! If it’s a very complicated process, then the benefits they provide could be canceled by the work needed to find them. Fortunately, all one needs to do is solve a polynomial and perform Gaussian Elimination.

 

Somehow, to each square matrix we’ll attach a polynomial in one variable, whose degree is the number of columns (or equivalently, the number of rows). So finding the eigenvalues of a 2x2 matrix just requires solving a quadratic equation, which is easy by the quadratic formula.

 

Unfortunately, a polynomial with real coefficients does not necessarily have real roots. For example, x² + 1 = 0 has two roots, x = i and x = -i, where as always i = Sqrt[-1].
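
(Optional aside: if you like to check these computations on a computer, here is a quick sketch in Python using the numpy library. Python is not part of these notes; these little code asides are just sanity checks.)

          import numpy as np

          # Roots of x^2 + 1 = 0; coefficients are listed from the highest power down.
          print(np.roots([1, 0, 1]))    # prints the two roots, i and -i (Python writes i as 1j)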

 

Reminder: below is a list of types of numbers. Each one is a subset of the next:

 

          [1] Integers: ..., -2, -1, 0, 1, 2, ...

          [2] Rationals: p/q, where p, q are integers and q ≠ 0

          [3] Reals: think any terminating or infinite decimal

          [4] Complex: of the form a + bi where a and b are real numbers

 

So, even if we want to study ONLY matrices with real coefficients, we may need to introduce complex numbers to find their eigenvalues. For example, consider the following matrix

 

 

R   =   (0 -1)
        (1  0)

 

We’ll see later that this has eigenvalues ±i.
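
(Another optional Python/numpy check: even though R has only real entries, the computed eigenvalues come out complex.)

          import numpy as np

          R = np.array([[0.0, -1.0],
                        [1.0,  0.0]])
          print(np.linalg.eigvals(R))   # prints the eigenvalues i and -i (as 1j and -1j)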

 

However, all is not lost. We have several theorems that will help us in our study:

 

 

FUNDAMENTAL THEOREM OF ALGEBRA:

Consider a polynomial in one variable of degree n ≥ 1. Then, over the complex numbers, it has exactly n roots, counted with multiplicity (they are not necessarily distinct).

 

 

THEOREM OF COMPLEX CONJUGATION:

Let f(x) be a polynomial with real coefficients. If z is a root of f(x) (i.e., f(z) = 0), then so is the complex conjugate of z.
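
(Optional Python/numpy illustration: x² - 2x + 2 is a polynomial I picked with real coefficients, and its roots show up as a conjugate pair.)

          import numpy as np

          # x^2 - 2x + 2 has real coefficients; its roots are the conjugate pair 1+i and 1-i.
          print(np.roots([1, -2, 2]))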

 

 

EIGENVALUES OF SYMMETRIC MATRICES:

If all the entries of a symmetric matrix are real, then all of its eigenvalues are real.
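
(Optional Python/numpy check, with a symmetric matrix I picked as an example:)

          import numpy as np

          S = np.array([[2.0, 1.0],
                        [1.0, 2.0]])   # symmetric (S equals its transpose), all entries real
          print(np.linalg.eigvals(S))  # prints 3 and 1 -- both real, as the theorem promises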

 

 

Basic properties of complex numbers:

 

ADDITION:

      2 + 3i              11 - 7i

   +  4 - 5i            +  8 + 8i

   ----------           ----------

      6 - 2i              19 +  i

 

 

 

 

 

 

 

 

MULTIPLICATION:

 

Recall (a+b)(c+d) = ac + ad + bc + bd. This is how you multiply complex numbers, but you must remember that i² = -1.

 

For example:

 

          (2 - 3i)(5 + 2i)   =   2*5 + 2*2i - 3i*5 - 3i*2i

                             =   10 + 4i - 15i - 6i²

                             =   10 - 11i + 6

                             =   16 - 11i
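
(Optional: Python has complex numbers built in, writing i as j, so you can check all of the above arithmetic directly.)

          # The addition examples from above:
          print((2 + 3j) + (4 - 5j))    # (6-2j)
          print((11 - 7j) + (8 + 8j))   # (19+1j)

          # The multiplication example from above:
          print((2 - 3j) * (5 + 2j))    # (16-11j)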

 

 

GRAPHICAL REPRESENTATION:

 

[Figure: points plotted in the complex plane, with the real part on the horizontal axis and the imaginary part on the vertical axis. The points shown are 2+4i, 3+i, 2-2i, and -2-2i.]

 

 

 

 

 


COMPLEX CONJUGATION:

                               

If z = x + iy, then z̄ = x - iy. We read z̄ as “the complex conjugate of z.”

So 3 - 2i goes to 3 + 2i. -5 - 7i goes to -5 + 7i. -11 goes to -11, and 76i goes to -76i. Remember any real number x can be written x + 0i. Any number of the form 0 + iy is said to be purely imaginary.
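
(Optional Python check: conjugation is the built-in conjugate() method.)

          print((3 - 2j).conjugate())      # (3+2j)
          print((-5 - 7j).conjugate())     # (-5+7j)
          print(complex(-11).conjugate())  # (-11-0j), i.e. still -11: a real number is its own conjugate
          print((76j).conjugate())         # -76j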

 

[16a] FINDING EIGENVECTORS (First Version)

 

A vector has two parts: (1) a direction; (2) a magnitude. Let A be a matrix, and v a vector. Then Av is a new vector. In general, Av and v will point along different lines. However, sometimes one can find a special vector (or vectors) where Av lies along the same line as v (pointing the same way or the opposite way). In this case we say v is an eigenvector of A. For shorthand, we often drop the ‘of A’ and say v is an eigenvector.

 

However, in general v will not equal Av – they may be in the same direction, but they’ll differ in magnitude. For example, Av may be twice as long as v, or Av = 2v. Or maybe it’s three times, giving Av = 3v. Or maybe it’s half as long, and pointing in the opposite direction: Av =  -½ v.

 

In general, we write for v an eigenvector of A:

 

          Av = λv, where λ is called the eigenvalue.

 

One caveat: for any matrix A, the zero vector 0 satisfies A 0 = 0. But it also satisfies A 0 = 2 0, A 0 = 3 0, .... The zero vector would always be an eigenvector, and any real number would be its eigenvalue. Later you’ll see it’s useful to have just one eigenvalue for each eigenvector; moreover, you’ll also see non-zero eigenvectors encode lots of information about the matrix. Hence we make a definition and require an eigenvector to be non-zero.

 

The whole point of eigenvectors / eigenvalues is that instead of studying the action of our matrix A on every possible direction, if we can just understand it in a few special ones, we’ll understand A completely. Studying the effect of A on the zero vector provides no new information, as EVERY matrix A acting on the zero vector yields the zero vector.

 

Note: what is an eigenvector for one matrix may not be an eigenvector for another matrix. For example:

 

(1 2) (1)   =   (3)   =   3 (1)
(2 1) (1)       (3)         (1)

 

so here (1,1) is an eigenvector with eigenvalue 3.

 

(1 1) (1)   =   (2)
(2 2) (1)       (4)

 

and as (2,4) is not a multiple of (1,1), we see (1,1) is not an eigenvector of this second matrix.
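
(Optional Python/numpy check of both computations; I’ll call the two matrices A and B for the sketch, and @ is matrix-times-vector.)

          import numpy as np

          v = np.array([1.0, 1.0])

          A = np.array([[1.0, 2.0],
                        [2.0, 1.0]])
          print(A @ v)    # [3. 3.] = 3v, so v IS an eigenvector of A

          B = np.array([[1.0, 1.0],
                        [2.0, 2.0]])
          print(B @ v)    # [2. 4.], not a multiple of v, so v is NOT an eigenvector of B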

 

 

Let’s find a method to determine what the eigenvector is, given an eigenvalue.

 

So, we are given as input a matrix A and an eigenvalue λ, and we are trying to find a non-zero vector v such that Av = λv.

 

Remember, if I is the identity matrix, Iv = v for any vector v. This is the matrix equivalent of multiplying by 1.

 

Av = λv      in algebra, we put all the unknowns on one side.

             So here we subtract λv from both sides. I’m going to

             write 0 for the zero vector, but remember, it is not

             just a number, but a vector where every component is zero.

 

Av - λv = 0

 

Av - λIv = 0          remember, your prof is from an Iv-y school: put in the ‘I’

 

(A - λI) v = 0

 

See, λI is a matrix, A is a matrix, so the above is legal. We NEED to put in the Identity matrix. Otherwise, if we went from Av - λv to (A - λ)v we’d be in trouble. There, A is a matrix (say 2x2), but λ is a number. And we don’t know how to subtract a number from a matrix. We do, however, know how to subtract two matrices. Hence we can calculate what A - λI is, and then do Gaussian Elimination.

 

Let’s do an example:

 

Say A is

 

(4 3)

(2 5)

 

and say λ = 2. We now try to find the eigenvector v.

 

Av = 2v

Av - 2v = 0

Av - 2Iv = 0

(A - 2I)v = 0

 

Let’s determine the matrix A - 2I

 

I is   (1 0)   so λI is   (2 0)
       (0 1)              (0 2)

 

Hence

 

A - λI   =   (4 3)   -   (2 0)   =   (2 3)
             (2 5)       (0 2)       (2 3)

 

So we are doing Gaussian elimination on the above matrix. Let’s write v = (x,y). Then we must solve:

 

(2 3) (x)   =   (0)
(2 3) (y)       (0)

 

So, we multiply the first row by -1 and add it to the second, getting

 

(2 3) (x)   =   (0)
(0 0) (y)       (0)

 

The second equation, 0x + 0y = 0, is satisfied for all x and y. The first equation, 2x + 3y = 0, says y = -(2/3)x. So we see that

 

v   =   (x)   =   (    x    )
        (y)       ( -(2/3)x )

 

Now x is arbitrary, as long as v is not the zero vector. There are many different choices we can make. We can take x = 1 and get the vector (1, -2/3). We can take x = 3 and get the vector v = (3, -2). Notice, however, that the second choice is in the same direction as the first, just with a different magnitude.
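
(Optional Python/numpy check of the example we just worked out:)

          import numpy as np

          A = np.array([[4.0, 3.0],
                        [2.0, 5.0]])
          v = np.array([3.0, -2.0])   # the choice x = 3 from above
          print(A @ v)                # [ 6. -4.] = 2v, confirming Av = 2v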

 

This reflects the fact that if v is an eigenvector of A, then so is any non-zero multiple of v. Moreover, it has the same eigenvalue. Here’s the proof:

 

Say Av = λv. Consider the vector w = mv, where m is any non-zero number.

 

Then Aw   =   A(mv)

          =   m(Av)

          =   m(λv)

          =   λ(mv)

          =   λw

 

Hence the different choices of x just correspond to taking different lengths of the eigenvector. Usual choices include x = 1, x = whatever is needed to clear all denominators, and x = whatever is needed to make the vector have length one.
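
(And an optional check of the scaling fact just proved, taking m = 10:)

          import numpy as np

          A = np.array([[4.0, 3.0],
                        [2.0, 5.0]])
          w = 10 * np.array([3.0, -2.0])   # w = mv with m = 10
          print(A @ w)                     # [ 60. -40.] = 2w: the eigenvalue is still 2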

 

 

 

[16b] FINDING EIGENVALUES (Second Version)

 

We now (finally!) shall see how to find the eigenvalues for a given matrix. Let’s look at a SQUARE matrix A, and see what numbers can be eigenvalues, and what numbers can’t. Let I be the corresponding identity matrix. So, if A is 2x2, I is 2x2; if A is 3x3, I is 3x3, etc.

 

If λ is an eigenvalue of A, that means there is a non-zero vector v such that

 

          A v   =   λ v

 

But Iv = v (as I is the Identity matrix) so

 

          A v           =   λ I v

          A v - λ I v   =   O          where O is the zero vector.

          (A - λI) v    =   O

 

Now, A - λI is a new matrix. Let’s call it Bλ. Remember how we subtract matrices:

 

          (a b)       (e f)       (a-e   b-f)
          (c d)   -   (g h)   =   (c-g   d-h)

 

So, we are trying to find pairs λ and v (v non-zero) such that

 

          Bλ v = O

 

Assume the matrix Bλ is invertible. Then we can multiply both sides by Bλ⁻¹ and we get

 

          Bλ⁻¹ Bλ v = Bλ⁻¹ O

 

But any matrix acting on the zero vector yields the zero vector. Hence the Right Hand Side is just O. On the left, Bλ⁻¹ Bλ = I, the Identity matrix. So the Left Hand Side is just v.

 

Hence, if Bλ is invertible, we get v = O. But v must not be the zero vector!

 

So we have found a necessary condition:

 

Given a square matrix A, λ is not an eigenvalue of A if A - λI is invertible. Hence the only candidates are those λ such that A - λI is not invertible.

 

 

 

 

 

 

 

 

It can actually be shown that this necessary condition is in fact sufficient, namely, if A - λI is not invertible, then λ is an eigenvalue and there is an eigenvector v. Unfortunately, even if the matrix A has all real entries, it’s possible that its eigenvector could have complex entries, so we will not give a proof now.

 

Hence we need an easy way to tell when a matrix is invertible, and when it isn’t. It turns out that if A is a square matrix (remember, only square matrices are invertible), then A is invertible if and only if Determinant(A) is non-zero. We’ll talk more about this later; for now, you may trust Fine Hall.

 

 

Given a square matrix A, λ is an eigenvalue of A if and only if Determinant(A - λI) = 0.
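
(Optional Python/numpy illustration, reusing the matrix from section [16a] whose eigenvalue 3 we already know:)

          import numpy as np

          A = np.array([[1.0, 2.0],
                        [2.0, 1.0]])     # earlier we saw 3 is an eigenvalue of this A
          I = np.eye(2)
          print(np.linalg.det(A - 3 * I))  # 0 (up to rounding): A - 3I is not invertible
          print(np.linalg.det(A - 2 * I))  # about -3, non-zero: 2 is NOT an eigenvalue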

 

 

 

 

 

 

 

Let’s now do an example. Consider

 

A   =   (3 2)
        (4 1)

 

Now

λI   =   (λ 0)
         (0 λ)

 

and

 

A - λI   =   (3-λ    2 )
             ( 4    1-λ)

 

 

Determinant(A - λI)   =   (3-λ)(1-λ) - (2)(4)

                      =   3 - λ - 3λ + λ² - 8

                      =   λ² - 4λ - 5

                      =   (λ - 5)(λ + 1)

                      So λ = 5 or λ = -1, agreeing with the Homework.
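
(Optional Python/numpy check: np.poly(A) gives the coefficients of the characteristic polynomial Determinant(λI - A), which has the same roots as Determinant(A - λI).)

          import numpy as np

          A = np.array([[3.0, 2.0],
                        [4.0, 1.0]])
          p = np.poly(A)        # coefficients of lambda^2 - 4 lambda - 5, i.e. [ 1. -4. -5.]
          print(np.roots(p))    # the roots 5 and -1: the eigenvalues, matching the hand computation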

 

 

Let’s do one more example:

 

A   =   (2 6)
        (4 4)

 

Now

λI   =   (λ 0)
         (0 λ)

 

and

 

A - λI   =   (2-λ    6 )
             ( 4    4-λ)

 

 

Determinant(A - λI)   =   (2-λ)(4-λ) - (6)(4)

                      =   8 - 4λ - 2λ + λ² - 24

                      =   λ² - 6λ - 16

                      =   (λ - 8)(λ + 2)

                      So λ = 8 or λ = -2.
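
(And the same check for this last example, asking numpy for the eigenvalues directly:)

          import numpy as np

          A = np.array([[2.0, 6.0],
                        [4.0, 4.0]])
          print(np.linalg.eigvals(A))   # prints 8 and -2, matching the factorization above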