We discussed administrative matters and introduced ourselves. Then we began trying to understand which functions satisfy the property f(x+y) = f(x) + f(y). Click below for a more detailed lecture summary.
Click here for Lecture 1 notes
We continued trying to 'solve' the equation f(x+y) = f(x) + f(y) for the function f. We made some progress, in particular showing that f(a/b) = (a/b) f(1) for any fraction a/b. We also introduced some notation: ℝ for the real numbers, ℤ for the integers, the symbols for 'for all' and 'contained in', QED, etc. Click below for a more detailed lecture summary.
Click here for Lecture 2 notes
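For reference, here is a compressed version of that computation for positive integers a and b (negative fractions then follow from f(-x) = -f(x), which is another consequence of additivity); the steps in class may have been organized differently:

\[
b\, f\!\left(\tfrac{a}{b}\right) \;=\; f\!\left(\tfrac{a}{b} + \cdots + \tfrac{a}{b}\right) \;=\; f(a) \;=\; a\, f(1),
\qquad\text{so}\qquad
f\!\left(\tfrac{a}{b}\right) \;=\; \tfrac{a}{b}\, f(1),
\]

where the first sum has b terms and f(a) = a f(1) comes from applying additivity to 1 + 1 + ... + 1.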
Today we discussed rational versus irrational numbers. In particular, we proved that the square-root of 2 must be irrational. Click below for a more detailed lecture summary.
Click here for Lecture 3 notes
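For completeness, here is the skeleton of the standard contradiction argument (the version given in class may differ in its details):

\[
\sqrt{2} = \tfrac{a}{b} \ \text{in lowest terms}
\;\Longrightarrow\; a^2 = 2b^2
\;\Longrightarrow\; a \ \text{is even, say } a = 2c
\;\Longrightarrow\; b^2 = 2c^2
\;\Longrightarrow\; b \ \text{is even},
\]

which contradicts the assumption that a/b was in lowest terms.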
Today we went back to discussing functions satisfying f(x+y) = f(x) + f(y). We discovered that if f is continuous, then our conjecture that f(x)=x f(1) for every real number x is true! But what if our function is not continuous? Do such f even exist? The answer to the latter question depends on whether or not you believe the Axiom of Choice. Click below for a more detailed summary.
Click here for Lecture 4 notes
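One standard way to carry out the continuity step, sketched here on the assumption that it matches the approach from class: choose rational numbers q_n converging to x and pass to the limit,

\[
f(x) \;=\; f\Big(\lim_{n\to\infty} q_n\Big)
\;=\; \lim_{n\to\infty} f(q_n)
\;=\; \lim_{n\to\infty} q_n\, f(1)
\;=\; x\, f(1),
\]

where the second equality uses continuity of f and the third uses the Lecture 2 result f(q) = q f(1) for rational q.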
Today we began exploring linear maps, the object of interest in linear algebra. We played around with two simple types of linear maps -- those from R to R and those from R2 to R -- and were able to completely characterize them.
Click here for Lecture 5 notes
Today we saw that our characterizations of linear maps from R to R and from R2 to R are really the same, once we use the language of dot product. We then explored various properties of dot products.
Click here for Lecture 6 notes
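As a tiny numerical illustration of this rephrasing (the specific numbers are mine, not from the lecture), the linear map f(x,y) = 3x - 2y is exactly "dot with the fixed vector (3,-2)":

```python
import numpy as np

# The linear map f(x, y) = 3x - 2y, viewed as a dot product with w = (3, -2).
w = np.array([3.0, -2.0])
v = np.array([4.0, 5.0])
print(np.dot(w, v))   # 2.0
print(3*4 - 2*5)      # 2, the same value computed directly
```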
We saw two ways to approach a problem about linear maps from R2 to R. We then introduced linear maps from R2 to R2.
Click here for Lecture 7 notes
Today we explored the rotation map. We came up with a geometric explanation of why this map is linear. We also used polar coordinates to discover a formula for how the map acts on a given point (x,y).
Click here for Lecture 8 notes
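Here is a quick numerical check of both points, with an angle and test points chosen arbitrarily by me (a companion sketch, not the derivation from class):

```python
import numpy as np

# Rotation by angle t sends (x, y) to (x cos t - y sin t, x sin t + y cos t).
t = 0.7
R = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])

p = np.array([1.0, 2.0])
q = np.array([-3.0, 0.5])
# Linearity: rotating p + q agrees with rotating p and q separately and adding.
print(np.allclose(R @ (p + q), R @ p + R @ q))   # True
```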
We rephrased the action of a linear map from R2 to R2 as the action of a matrix on a point; this also inspired us to start writing points as columns rather than rows. We characterized all such linear maps, and described the corresponding matrices.
Click here for Lecture 9 notes
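The following sketch (my own example, not one from class) illustrates the correspondence: writing points as columns, the matrix of a linear map from R2 to R2 has the images of (1,0) and (0,1) as its columns.

```python
import numpy as np

def f(v):
    # A sample linear map from R2 to R2.
    x, y = v
    return np.array([2*x + y, x - 3*y])

A = np.column_stack([f((1, 0)), f((0, 1))])   # matrix whose columns are f(1,0) and f(0,1)
v = np.array([4.0, -1.0])
print(np.allclose(A @ v, f(v)))               # True: the matrix acting on v reproduces f(v)
```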
We looked at how different linear maps act geometrically on the plane. We also defined the notions of composition and image.
Click here for Lecture 10 notes
We proved that the image of a linear map is either the entire plane (in which case it's called nonsingular), or else is entirely contained inside of some line (in which case the map is called singular). We also discussed the notion of invertibility, and stated a theorem about the invertibility of nonsingular linear maps.
Click here for Lecture 11 notes
Today we proved that every nonsingular linear map is invertible, and asserted that its inverse is also linear. We also proved that the composition of any two linear maps is linear, and established a formula for the matrix of the composition of two maps in terms of the matrices of those two individually.
Click here for Lecture 12 notes
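A quick numerical sanity check of that formula, with matrices chosen arbitrarily by me: if A is the matrix of f and B is the matrix of g, then BA should be the matrix of the composition g o f.

```python
import numpy as np

A = np.array([[2.0, 1.0], [0.0, 3.0]])    # matrix of f
B = np.array([[1.0, -1.0], [4.0, 2.0]])   # matrix of g
v = np.array([5.0, -2.0])

# Applying f and then g agrees with applying the single matrix B @ A.
print(np.allclose(B @ (A @ v), (B @ A) @ v))   # True
```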
We've discussed how linear maps affect shapes. Today we looked at how they affect area. This led us to define the determinant of a map.
Click here for Lecture 13 notes
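As a small illustration (example matrix mine): the unit square maps to the parallelogram spanned by the columns of the matrix, and its area agrees with the absolute value of the determinant.

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
c1, c2 = A[:, 0], A[:, 1]          # the unit square maps to the parallelogram spanned by these
theta = np.arccos(np.dot(c1, c2) / (np.linalg.norm(c1) * np.linalg.norm(c2)))
area = np.linalg.norm(c1) * np.linalg.norm(c2) * np.sin(theta)   # base times height
print(area, abs(np.linalg.det(A)))   # both are 5.0 (up to rounding)
```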
We first verified our conjecture from last time about the determinant of a composition. We then gave an application of this to prove that rectangles scale nicely under linear maps. We concluded by pointing out that linear maps, although defined on points, are in fact well-defined on vectors.
Click here for Lecture 14 notes
Today we continued discussing vectors, in particular proving that any linear map f from R2 to R2, when viewed as a function on the set V2 of all vectors in the plane, remains well-defined. We ended by hinting at change of basis.
Click here for Lecture 15 notes
After reviewing our work from last time, we resumed our exploration of change of basis. We realized that there was an implicit linear map at play, which we can think of as an English-Shibbolese dictionary. We proved that if this map is nonsingular, then any vector can be expressed in Shibbolese.
Click here for Lecture 16 notes
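Here is a rough numerical sketch of the translation step, with a basis and vector of my own choosing: if the dictionary matrix B (whose columns are the Shibbolese basis vectors, written in English) is nonsingular, then expressing v in Shibbolese amounts to solving B c = v.

```python
import numpy as np

B = np.column_stack([(1.0, 1.0), (1.0, -1.0)])   # a nonsingular "dictionary"
v = np.array([3.0, 1.0])
c = np.linalg.solve(B, v)        # coordinates of v with respect to the new basis
print(c)                         # [2. 1.]
print(np.allclose(B @ c, v))     # True: 2*(1,1) + 1*(1,-1) recovers (3,1)
```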
We concluded our discussion of change-of-basis. We then turned to the Singular Value Decomposition. We considered it from two perspectives: one as a method of understanding what a map f does geometrically, the other as a way of thinking about f as mapping a rectangular lattice to another rectangular lattice.
Click here for Lecture 17-18 notes
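For those who like to experiment, numpy can compute a singular value decomposition; the small example below (matrix chosen by me) just confirms that A factors as an orthogonal matrix, a diagonal scaling, and another orthogonal matrix.

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])
U, s, Vt = np.linalg.svd(A)
print(np.allclose(A, U @ np.diag(s) @ Vt))      # True: A = U * diag(s) * V^T
print(np.allclose(U @ U.T, np.eye(2)))          # True: U is orthogonal (a rotation/reflection)
print(s)                                        # the singular values, here [4. 2.]
```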
We completed our discussion of the singular value decomposition, proving various unproved assertions from last time. We then began exploring powers of a certain matrix, which turned out to be related to the Fibonacci numbers.
Click here for Lecture 19-20 notes
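A quick computational check of the pattern we observed (stated here as I understand it): the n-th power of the matrix [[1,1],[1,0]] has consecutive Fibonacci numbers as its entries.

```python
import numpy as np

A = np.array([[1, 1],
              [1, 0]])
P = np.linalg.matrix_power(A, 10)
print(P)
# [[89 55]
#  [55 34]]   i.e.  F(11) = 89, F(10) = 55, F(9) = 34
```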
We first discussed associativity, specifically of function composition. Next we proved (by induction) a formula for powers of a certain matrix in terms of Fibonacci numbers. Finally, we motivated and defined the notion of similarity, and outlined our strategy for finding a formula for Fibonacci numbers.
Click here for Lecture 21 notes
We found a formula for the n-th Fibonacci number using matrices. This inspired us to study the spectral decomposition, which in turn led us to explore eigenvalues and eigenvectors.
Click here for Lecture 22-24 notes
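As a numerical companion (my own check, not a transcript of the lectures): the eigenvalues of [[1,1],[1,0]] are (1 ± sqrt(5))/2, and the resulting closed formula F(n) = (phi^n - psi^n)/sqrt(5) reproduces the Fibonacci numbers.

```python
import numpy as np

phi = (1 + np.sqrt(5)) / 2
psi = (1 - np.sqrt(5)) / 2
print(np.sort(np.linalg.eigvals(np.array([[1.0, 1.0], [1.0, 0.0]]))))   # [psi, phi]
print([round((phi**n - psi**n) / np.sqrt(5)) for n in range(1, 8)])     # [1, 1, 2, 3, 5, 8, 13]
```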
We introduced the notion of vector space, and explored a number of examples. The material is very similar to Chapter 1, Section 1 of the textbook. (The book, Linear Algebra Done Wrong by Sergei Treil, is available for free (and legal!) PDF download.) Perhaps the biggest difference from the book was the addition of one special example: MSS3, the vector space of all 3x3 magic squares.
We warmed up by proving that 0v = 0 for any vector v in a vector space V. We then defined the notion of a basis of a vector space, and gave examples and nonexamples of bases. See section 2 of the textbook.
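For reference, one short version of the warm-up argument (the in-class proof may have been phrased differently): using distributivity,

\[
0v \;=\; (0 + 0)v \;=\; 0v + 0v,
\]

and adding the additive inverse of 0v to both sides leaves 0 = 0v, where the left-hand 0 is the zero vector.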
We warmed up by proving that a vector space has a unique additive identity (which we call 0). We then defined the concepts of spanning set, linear dependence, and linear independence. Finally, we proved that a set of vectors is linearly independent if and only if none of the vectors can be written as a linear combination of the others. See section 2 of the textbook.
We proved what I called the Fundamental Property of Bases (Treil calls it by the less romantic name Proposition 2.7). Then we proved that any two bases of a vector space V have the same number of elements. (We did this by using the Steinitz Exchange Trick.) This allowed us to define the dimension of a vector space: it's the number of vectors in a basis. (See the notes posted below for the proof that every basis has the same number of elements.)
Click here for some notes on lecture 28
We discussed how to use the fundamental property of bases to determine whether or not a given set of vectors is a basis. In particular, we proved that {(1,1),(1,-1)} is a basis of R2, while {(1,2,-1),(2,-1,1),(8,1,1)} is not a basis of R3. We then proved that any spanning set contains a basis (Proposition 2.8 in Chapter 1 of the textbook).
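One convenient computational criterion (not necessarily the method we used in class, but the conclusions match): put the vectors into the columns of a matrix and check whether that matrix is nonsingular, i.e. has nonzero determinant.

```python
import numpy as np

B2 = np.column_stack([(1, 1), (1, -1)])
B3 = np.column_stack([(1, 2, -1), (2, -1, 1), (8, 1, 1)])
print(np.linalg.det(B2))               # -2.0, nonzero, so {(1,1),(1,-1)} is a basis of R2
print(round(np.linalg.det(B3), 10))    # 0.0 (up to rounding), so these three vectors are not a basis of R3
```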
We began by defining what it means for a vector space to be finite-dimensional: that there exists a finite spanning set. This definition immediately allowed us to prove that any finite-dimensional vector space has a basis. Next, we sketched a proof that any linearly independent set of vectors in a finite-dimensional space is contained in a basis of that space; you will write out this proof rigorously in this week's Problem Set. Finally, we defined the notion of a linear map from one vector space to another (see Chapter 1, Section 3 of the text; note that Treil calls this a linear transformation). We concluded by considering four examples of linear maps. The first was the function f : R3 → R2 defined by f(x,y,z) = (x,y). The second was g : R2 → R3 defined by g(x,y) = (x,y,2x+y). The third example was the differential operator: d/dx : Pn → Pn-1. The final example we considered was h : MSS3 → R which sends a magic square to its magic sum.
Today we discussed the concepts of invertibility and isomorphisms. Since the definition I gave is different from that in the book, I've written up a lecture summary (click below).
Click here for notes on lecture 31
Today we introduced notions which will help us measure how far a linear map is from being an isomorphism: these are the image and kernel of the map. Along the way we defined the notion of subspace, and stated the Rank-Nullity Theorem.
Click here for notes on lecture 32
We started by proving the Rank-Nullity Theorem. Then we explored some of its consequences.
Click here for notes on lecture 33
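Here is a small numerical illustration (matrix chosen by me) of the theorem's bookkeeping: for a map given by a 3x3 matrix A, the dimension of the image (the rank) plus the dimension of the kernel (the nullity) should equal 3, the dimension of the domain.

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],    # twice the first row, so A is singular
              [0.0, 1.0, 1.0]])
rank = np.linalg.matrix_rank(A)                            # dimension of the image
nullity = sum(s < 1e-10 for s in np.linalg.svd(A)[1])      # dimension of the kernel
print(rank, nullity, rank + nullity == 3)                  # 2 1 True
```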
We discussed how to represent an abstract linear map as a matrix. Among other things, we represented the differential operator as a matrix.
Click here for notes on lecture 34
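Here is one concrete instance, written as a sketch (the basis {1, x, x^2, x^3} and the specific polynomial are my choices): the matrix of d/dx from P3 to P2 acts on coefficient vectors.

```python
import numpy as np

# Columns are the derivatives of the basis polynomials 1, x, x^2, x^3,
# written in the basis 1, x, x^2 of P2.
D = np.array([[0, 1, 0, 0],
              [0, 0, 2, 0],
              [0, 0, 0, 3]])

p = np.array([5, -1, 4, 2])   # p(x) = 5 - x + 4x^2 + 2x^3
print(D @ p)                  # [-1  8  6], i.e. p'(x) = -1 + 8x + 6x^2
```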
First, we discussed how to view abstract vector spaces (and associated linear maps) in a more concrete way. Next, we talked about the determinant of a linear map; we went over two methods (the notes also contain a description of another method due to Lewis Carroll, but that's just for fun -- you won't be tested on that, of course). Our second approach led to the concept of triangular matrices, and also to a method for finding the inverse of a given matrix. Finally, we briefly touched on the singular value decomposition and the spectral decomposition.
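A brief numerical companion to the last two points (examples mine): numpy can invert a nonsingular matrix, and for a triangular matrix the determinant is just the product of the diagonal entries.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
A_inv = np.linalg.inv(A)
print(np.allclose(A @ A_inv, np.eye(2)))        # True: A times its inverse is the identity

T = np.array([[3.0, 7.0],
              [0.0, 5.0]])                      # upper triangular
print(np.isclose(np.linalg.det(T), 3.0 * 5.0))  # True: det equals the product of the diagonal
```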