NOTES ON LINEAR ALGEBRA

 

CONTENTS:

[10] BASIS VECTORS - PART II

[11] LINEAR TRANSFORMATIONS

 

 

[10] BASIS VECTORS - PART II

We’ll now give a procedure to determine when two vectors W1 and W2 are a basis for the plane. Not only will our method say whether they’re a basis, it will also tell us how to find the coefficients a and b with (x,y) = a W1 + b W2.

 

The Equation we’re trying to solve is:

 

Let                                           (R)                                           (U)                  

                        W1      =          (S)                   W2      =          (V)

 

 

Find a, b so that           

 

                        (x)                    (R)                   (U)

                        (y)        =       a (S)       +      b  (V)

                                               

Then

 

                        (x)                    (aR)                 (bU)
                        (y)        =          (aS)     +          (bV)

 

 

                        (x)                    (Ra)                 (Ub)
                        (y)        =          (Sa)     +          (Vb)

 

 

                        (x)                    (Ra + Ub)
                        (y)        =          (Sa + Vb)

 

 

                        (x)                    (R U) (a)
                        (y)        =          (S V) (b)

 

 

 

 

But this equation is just begging us to use Gaussian Elimination. We need to find a number m such that, if we multiply the first row by m and add it to the second, the new second row will be (0   something).

 

So           Rm + S = 0

hence               m =  -S / R          (assuming R ≠ 0; if R = 0, we can swap the two rows first)

 

 

 

So, we carry out the Gaussian Elimination. It can be shown that if the two vectors (R,S) and (U,V) do not lie on the same line, then Gaussian Elimination will never yield the last row all zero.
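
If you want to check this procedure on a computer, here is a small Python sketch of it (the function name is_basis_2d and the row-swap fallback for R = 0 are my own choices): it performs the single elimination step above and reports whether the entry that remains is nonzero.

    def is_basis_2d(W1, W2):
        # W1 = (R, S) and W2 = (U, V) are the columns of the matrix (R U; S V).
        R, S = W1
        U, V = W2
        if R == 0:
            # Can't divide by R, so swap the two rows of the matrix first.
            R, S = S, R
            U, V = V, U
            if R == 0:
                return False          # both entries of W1 are zero: certainly not a basis
        m = -S / R                    # multiply row one by m and add it to row two ...
        second_pivot = V + m * U      # ... so the entry below the pivot becomes 0; this is what remains
        return second_pivot != 0      # a nonzero remaining entry means W1, W2 form a basis

    print(is_basis_2d((2, 4), (3, 7)))   # True  (the first example below)
    print(is_basis_2d((2, 4), (4, 8)))   # False (the second example below)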

 

Let’s do some examples:

 

FIRST EXAMPLE

 

                                    (2)                                            (3)

            W1      =          (4)                    W2      =          (7)

 

 

Then we must solve

 

                        (x)                    (2)                    (3)

                        (y)        =       a(4)        +      b  (7)

                                               

Then

 

                        (x)                    (2a)                  (3b)
                        (y)        =          (4a)      +          (7b)

           

 

Hence

                        (x)                    (2 3) (a)
                        (y)        =          (4 7) (b)

           

 

 

NOW WE DO GAUSSIAN ELIMINATION:

 

So, what should we multiply the first row by? We need 2m + 4 = 0, so m = -4/2 = -2.

 

Hence we get

 

                        (x)                    (2 3) (a)
                        (y-2x)     =          (0 1) (b)

           

Or        2a + 3b = x;     0a + 1b = y - 2x

Therefore         b = y - 2x.

                        2a + 3b = x      →        a = (x - 3b)/2 = (x - 3(y - 2x))/2 = (7x - 3y)/2

 

So, given a vector (x,y) we can find a,b such that (x,y) = aW1 + bW2. So W1, W2 are a basis!
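
A quick numeric check of these formulas in Python (the test point (x, y) = (5, 1) is an arbitrary choice):

    # Check the formulas a = (7x - 3y)/2 and b = y - 2x at the arbitrary point (x, y) = (5, 1).
    x, y = 5, 1
    a = (7*x - 3*y) / 2          # a = 16.0
    b = y - 2*x                  # b = -9
    W1, W2 = (2, 4), (3, 7)
    print(a*W1[0] + b*W2[0], a*W1[1] + b*W2[1])   # 5.0 1.0  -- we get (x, y) back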

 

 

SECOND EXAMPLE

 

                                    (2)                                            (4)

            W1      =          (4)                    W2      =          (8)

 

 

Then we must solve

 

                        (x)                    (2)                    (4)

                        (y)        =       a(4)        +      b  (8)

                                               

Then

 

                        (x)                    (2a)                  (4b)
                        (y)        =          (4a)      +          (8b)

           

 

Hence

                        (x)                    (2 4) (a)
                        (y)        =          (4 8) (b)

           

NOW WE DO GAUSSIAN ELIMINATION:

 

So, what should we multiply the first row by? We need 2m + 4 = 0, so m = -4/2 = -2.

 

Hence we get

 

                        (x)                    (2 4) (a)
                        (y-2x)     =          (0 0) (b)

 

So, the two equations we must solve are:

 

            2a + 4b = x                  and                   0a + 0b = y - 2x

 

Now, regardless of what a and b are, 0a + 0b is always zero. If y - 2x is not zero, it will be impossible to solve the second equation. So, can we find x and y such that y - 2x ≠ 0? Sure. Take x = 0, y non-zero. Or take x nonzero, y = 0. Or take y = 22, x = 12. Almost any choice works.

 

So we see that W1, W2 are not a basis. We ended up with a row of zeros. Let’s look at our two vectors again:

 

                                    (2)                                            (4)

            W1      =          (4)                    W2      =          (8)

 

Notice that

                                    (2*2)                (2)

            W2      =          (2*4)      =     2(4)          =        2W1

 

Not only are W1 and  W2 not a basis, but they lie on the same line!
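
Here is a small Python sketch making the same point numerically (the range of coefficients and the target (0, 1) are arbitrary choices): every combination a W1 + b W2 lands on the line y = 2x, so anything off that line is out of reach.

    W1 = (2, 4)
    W2 = (4, 8)
    # Try a whole grid of coefficients (a, b) and look at which points a*W1 + b*W2 can reach.
    for a in range(-3, 4):
        for b in range(-3, 4):
            x = a*W1[0] + b*W2[0]
            y = a*W1[1] + b*W2[1]
            assert y == 2*x              # every reachable point sits on the line y = 2x
    print("every combination a*W1 + b*W2 lies on the line y = 2x, so e.g. (0, 1) is unreachable")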

 

 

THIRD EXAMPLE

 

                                    (1)                                            (2)                                            (0)
            W1      =          (2)                    W2      =          (2)                    W3      =          (1)
                                    (1)                                            (2)                                            (1)

 

 

Are W1, W2, W3 a basis?

 

 

(x)                    (1)                    (2)                    (0)

(y)        =     a   (2)        +      b  (2)        +       c(1)

(z)                    (1)                    (2)                    (1)

 

 

            (x)                    (1a)                  (2b)                  (0c)

            (y)        =          (2a)      +          (2b)      +          (1c)

            (z)                    (1a)                  (2b)                  (1c)

 

 

            (x)                    (1a + 2b + 0c)

            (y)        =          (2a + 2b + 1c)

            (z)                    (1a + 2b + 1c)

 

            (x)                    (1 2 0) (a)

            (y)        =          (2 2 1) (b)

            (z)                    (1 2 1) (c)

 

 

 

Now we do Gaussian Elimination! We multiply the first row by -2 and add it to the second row. (1m + 2 = 0, m = -2).

 

            (x)                    (1 2  0) (a)

            (y-2x)  =          (0 -2 1) (b)

            (z)                    (1 2  1) (c)

 

Now we multiply the first row by -1 and add it to the third row (1m + 1 = 0, m = -1).

 

            (x)                    (1 2  0) (a)

            (y-2x)  =          (0 -2 1) (b)

            (z-x)                 (0 0  1) (c)

 

We don’t have to do any more work, as this matrix is UPPER TRIANGULAR. This means the matrix is all zeros below the main diagonal. We can now solve the three equations, one at a time.

 

            0a + 0b + 1c = z - x     →    c = z - x

            0a - 2b + 1c = y - 2x    →    b = (y - 2x - c)/(-2) = (y - 2x - z + x)/(-2) = (x + z - y)/2

            1a + 2b + 0c = x         →    a = x - 2b = x - (x + z - y) = y - z

 

So these three vectors are a basis.
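
As a sanity check on the back substitution, here is a short Python snippet (the test point (x, y, z) = (1, 4, 2) is an arbitrary choice):

    # Back substitution for the third example, checked at the arbitrary point (x, y, z) = (1, 4, 2).
    x, y, z = 1, 4, 2
    c = z - x                    # from the third equation
    b = (y - 2*x - c) / (-2)     # from the second equation
    a = x - 2*b                  # from the first equation
    W1, W2, W3 = (1, 2, 1), (2, 2, 2), (0, 1, 1)
    combo = tuple(a*W1[i] + b*W2[i] + c*W3[i] for i in range(3))
    print(a, b, c)               # 2.0 -0.5 1
    print(combo)                 # (1.0, 4.0, 2.0), i.e. (x, y, z) again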

 

 

In general, to determine if three vectors are a basis for 3-space:

 

           

                                    (L)                                           (P)                                           (S)
            W1      =          (M)                  W2      =          (Q)                   W3      =          (T)
                                    (N)                                           (R)                                           (U)

 

 

Are W1, W2, W3 a basis?

 

(x)                    (L)                   (P)                   (S)

(y)        =     a   (M)      +      b  (Q)       +       c(T)

(z)                    (N)                   (R)                   (U)

 

 

            (x)                    (L a)                (P b)                (S c)

            (y)        =          (M a)   +          (Q b)    +          (T c)

            (z)                    (N a)                (R b)                (U c)

 

 

            (x)                    (L a + P b + S c)

            (y)        =          (M a +Q b + Tc)

            (z)                    (N a + R b + Uc)

 

            (x)                    (L  P  S) (a)

            (y)        =          (M Q T) (b)

            (z)                    (N  R U) (c)

 

 

To see how things are going: to determine if W1, W2, W3 are a basis, we are led to solving a matrix equation. The first column of our matrix is W1, the second column is W2, and the third column is W3. Call this matrix W. We then have

 

            (x)

            (y)        =          a W1  + b W2 + c W3

            (z)

 

            (x)                    (                         ) (a)

            (y)        =          ( W1   W2   W3)(b)

            (z)                    (                         ) (c)
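
If you want the computer to do this bookkeeping, here is a small sketch (assuming numpy is available; the test point (1, 0, 0) is an arbitrary choice): stack W1, W2, W3 as the columns of W, and solving W times the column (a, b, c) equal to (x, y, z) is exactly the system above.

    import numpy as np

    W1, W2, W3 = [1, 2, 1], [2, 2, 2], [0, 1, 1]   # the vectors from the third example
    W = np.column_stack([W1, W2, W3])              # W1, W2, W3 become the columns of W

    # Gaussian Elimination succeeds (no row of all zeros) exactly when W is invertible.
    print(np.linalg.det(W) != 0)                   # True: W1, W2, W3 are a basis

    # Solving W (a, b, c) = (x, y, z) then recovers the coefficients.
    print(np.linalg.solve(W, [1, 0, 0]))           # [ 0.   0.5 -1. ]  i.e. a = 0, b = 1/2, c = -1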

 

 

 

 

 

[11] LINEAR TRANSFORMATIONS

Linear Transformations are very useful in mathematics. The reason is that they allow us to understand functions at complicated values by understanding them at simpler values. First, the definition for functions, then we’ll generalize to matrices:

 

We say a function is a linear function if two conditions hold:

(1)  f(x + y) = f(x) + f(y) for all x,y

(2)  f(ax) = af(x) for all numbers a and all x

 

Now, it is very unusual for a function to be linear. Take f(w) = Sin[w].

 

Then f(ax) = Sin[ax], which usually is not a Sin[x] (that is, a times Sin[x]). For example, if x = 180 (degrees), then a Sin[x] = a Sin[180] is always zero. But if a = ½, Sin[ax] = Sin[90] = 1.

 

Let’s try f(w) = w^2. Does f(ax) = af(x)?

 

Well, f(ax) = (ax)^2 = a^2 x^2 = a^2 f(x) ≠ a f(x) unless a = 1 or 0.

Also, f(x+y) = (x+y)^2 = x^2 + 2xy + y^2 = f(x) + 2xy + f(y) ≠ f(x) + f(y) unless x or y = 0.

 

How about f(w) = 3w + 1?

Well, f(ax) = 3(ax) + 1   =          a(3x) + 1

                          =          a(3x + 1 - 1) + 1

                          =          a(f(x) - 1) + 1

                          =          a f(x) - a + 1

                          ≠          a f(x) unless a = 1

 

 

Just in case you’re wondering if any function is linear, here’s one that is:

 

            f(w)      =          3w

 

Then     f(ax)   =   3(ax)   =   a(3x)   =  a f(x)

 

            f(x+y)  =  3(x+y)  =  3x + 3y = f(x) + f(y)

 

[NOTE: one can prove that the only linear functions are f(x) = cx, where c is any real or complex number].
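
Here is a tiny Python check of the four examples above (math.sin stands in for Sin, and the sample values x, y, a are arbitrary choices): only f(w) = 3w passes both conditions.

    import math

    def additive(f, x, y):
        # Condition (1): f(x + y) = f(x) + f(y)
        return math.isclose(f(x + y), f(x) + f(y))

    def homogeneous(f, a, x):
        # Condition (2): f(ax) = a f(x)
        return math.isclose(f(a * x), a * f(x))

    x, y, a = 0.7, 1.3, 2.5                      # arbitrary sample values
    tests = [("Sin[w]", math.sin),
             ("w^2",    lambda w: w * w),
             ("3w + 1", lambda w: 3 * w + 1),
             ("3w",     lambda w: 3 * w)]
    for name, f in tests:
        print(name, additive(f, x, y) and homogeneous(f, a, x))
    # Only "3w" prints True.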

 

 

We now generalize this to higher dimensions. Why do we care about higher dimensions? Well, matrices act on vectors (you’ve seen this in your force / stress diagrams) and it turns out that matrices are linear transformations.

 

Let V and W be any two vectors with the same number of components, and let c be a real number. Then any matrix A (of the correct size to act on V and W) is a linear transformation, namely,

 

            (1)        A (V + W)       =   A V + A W

            (2)        A(c V)             = c A V           

 

 

I’ll sketch the proof for the 2x2 case:

 

                                    (v1)                                          (w1)

Let       V         =          (v2)                  W        =          (w2)

 

                                    (a b)

Let the matrix  A   =     (c d)

 

 

Then

           

                                    (a b) (    (v1)        (w1)    )

A (V + W)       =          (c d) (    (v2)   +   (w2)    )

 

                                    (a b) ( v1 + w1)

                        =          (c d) ( v2 + w2)

 

                                    ( a(v1 + w1)  +  b(v2 + w2) )

                        =          ( c(v1 + w1)  +  d(v2 + w2) )

 

                                    ( av1 + bv2      +    aw1 + bw2 )         

                        =          ( cv1 + dv2      +    cw1 + dw2 )

 

                                    ( av1 + bv2 )       ( aw1 + bw2 )         

                        =          ( cv1 + dv2 )    +  ( cw1 + dw2 )

 

                                    (a b) (v1)             (a b) (w1)

                        =          (c d) (v2)         +  (c d) (w2)

 

                        =          A V + A W

                       

 

The other condition is even easier to check (here I’ll write the scalar as e, since the letter c is already being used as an entry of A):

 

 

                                    (a b) (    (v1)  )

A (eV)             =          (c d) ( e (v2)   )

 

                                    (a b) (e v1)

                        =          (c d) (e v2)

 

                                    (ae v1 + be v2)

                        =          (ce v1 + de v2)

 

                       

                                    (a v1 + b v2)

                        =      e  (c v1 + d v2)

 

                                    (a b) (v1)

                        =      e  (c d) (v2)

 

                        =      e A V

 

 

A similar proof works for any size matrix, concluding the proof.
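
For a numerical sanity check of both properties, here is a short sketch (assuming numpy is available; the matrix A, the vectors V and W, and the scalar e are arbitrary choices):

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [3.0, 4.0]])       # an arbitrary 2x2 matrix
    V = np.array([5.0, -1.0])        # arbitrary vectors
    W = np.array([0.5, 2.0])
    e = 7.0                          # an arbitrary scalar

    print(np.allclose(A @ (V + W), A @ V + A @ W))   # True:  A(V + W) = A V + A W
    print(np.allclose(A @ (e * V), e * (A @ V)))     # True:  A(eV) = e A V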

 

COMING ATTRACTIONS:

WHY DO WE CARE ABOUT BASES? WHY DO WE CARE ABOUT LINEAR TRANSFORMATIONS? WHAT’S THE CONNECTION BETWEEN THE TWO?

 

Eventually, we’ll see that certain matrices have natural ‘bases’ attached to them. They (and powers of them) may look very ugly as given. But if we change bases, using something other than the x-axis and the y-axis, we can often make the matrices look good.

 

For symmetric matrices, this will be the case. In fact, the Principal Axis Theorem says we will be able to find a basis where, if we write our matrix relative to that basis, it will be diagonal!

 

Also, let’s say W1 and W2 are a basis. Then we can write any vector V = (x,y) in terms of the two, or

 

                                                V = a W1 + b W2

 

Then if A is a matrix, we have

 

                                                A V = A ( aW1 + bW2 )

 

                                                A V = A (aW1) + A (bW2)

 

                                                A V = a (A W1) + b (A W2)

 

Or, more generally,                   A^N V = a (A^N W1) + b (A^N W2)

 

 

Real Symmetric Matrices have what is called a ‘basis of eigenvectors’. That means we can choose the basis W1, W2 so that there are real numbers c1 and c2 with

 

            A W1 = c1 W1             A W2 = c2 W2

 

Applying A multiple times yields

 

            A^N W1 = c1^N W1         A^N W2 = c2^N W2

 

Hence

 

A^N V   =   a (A^N W1) + b (A^N W2)

        =   a c1^N W1       +   b c2^N W2    (Eq 11.1)

 

 

So here’s the advantage: Let’s say N is real big, say a billion. If we were to calculate A^N V we would have to multiply A by itself one billion times, and then have that act on V. That’s a lot of calculation to do.

 

But, if our matrix is symmetric, we’ll be able to find W1 and W2 (two calculations), c1 and c2 (two more calculations) and numbers a and b (two more calculations), and then we just take N = one billion in (Eq 11.1), and we’re done!
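
Here is a small numpy sketch of that saving (the symmetric matrix A, the vector V, and the exponent N = 30 are arbitrary choices standing in for a billion; np.linalg.eigh hands back the eigenvalues c1, c2 and the eigenvectors W1, W2 for us):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])                  # an arbitrary symmetric matrix
    V = np.array([3.0, -1.0])                   # an arbitrary vector
    N = 30                                      # stands in for "a billion"

    # The slow way: multiply A by itself N times, then act on V.
    direct = np.linalg.matrix_power(A, N) @ V

    # The eigenvector way of (Eq 11.1): A W1 = c1 W1 and A W2 = c2 W2, so
    # A^N V = a c1^N W1 + b c2^N W2.
    c, Wcols = np.linalg.eigh(A)                # eigenvalues (c1, c2), eigenvectors as columns W1, W2
    a, b = np.linalg.solve(Wcols, V)            # coefficients with V = a W1 + b W2
    via_eigenvectors = a * c[0]**N * Wcols[:, 0] + b * c[1]**N * Wcols[:, 1]

    print(np.allclose(direct, via_eigenvectors))   # True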

 

See how much we saved!