## Lecture 1 (Feb 5)

We discussed administrative matters and introduced ourselves. Then we began trying to understand which functions satisfy the property f(x+y) = f(x) + f(y). Click below for a more detailed lecture summary.

## Lecture 2 (Feb 8)

We continued trying to 'solve' the equation f(x+y) = f(x) + f(y) for the function f. We made some progress, in particular showing that f(a/b) = (a/b) f(1) for any fraction a/b. We also introduced some notation: ℝ for the real numbers, ℤ for the integers, the symbols ∀ ('for all') and ∈ ('contained in'), QED, etc. Click below for a more detailed lecture summary.
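The rational-scaling conclusion can be spot-checked numerically. Here is a small sketch (the constant c and the sample fractions are my own choices, not from lecture): every function of the form f(x) = cx satisfies the additivity equation, and for such f the identity f(a/b) = (a/b) f(1) holds exactly when we compute with fractions.

```python
from fractions import Fraction

# A sketch: f(x) = c*x is additive, and satisfies f(a/b) = (a/b) f(1).
c = Fraction(7, 3)   # hypothetical value; any constant works

def f(x):
    return c * x

x, y = Fraction(2, 5), Fraction(-1, 4)
assert f(x + y) == f(x) + f(y)   # additivity: f(x+y) = f(x) + f(y)

q = Fraction(9, 11)              # an arbitrary fraction a/b
assert f(q) == q * f(1)          # the identity proved in lecture
```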

## Lecture 3 (Feb 10)

Today we discussed rational versus irrational numbers. In particular, we proved that the square root of 2 must be irrational. Click below for a more detailed lecture summary.
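A small numerical companion to the proof (my own illustration, not a substitute for the argument): if √2 were a fraction a/b in lowest terms, we would have a² = 2b² for positive integers a, b. The check below confirms that 2b² is never a perfect square for any b up to a bound.

```python
import math

def is_perfect_square(n):
    # math.isqrt gives the integer square root; n is a perfect square
    # exactly when that root squares back to n
    r = math.isqrt(n)
    return r * r == n

# no counterexample with denominator below 10,000 (of course, the lecture
# proof rules out *all* denominators at once)
assert not any(is_perfect_square(2 * b * b) for b in range(1, 10_000))
```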

## Lecture 4 (Feb 12)

Today we went back to discussing functions satisfying f(x+y) = f(x) + f(y). We discovered that if f is continuous, then our conjecture that f(x)=x f(1) for every real number x is true! But what if our function is not continuous? Do such f even exist? The answer to the latter question depends on whether or not you believe the Axiom of Choice. Click below for a more detailed summary.

## Lecture 5 (Feb 15)

Today we began exploring linear maps, the object of interest in linear algebra. We played around with two simple types of linear maps -- those from R to R and those from R² to R -- and were able to completely characterize them.
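Presumably the characterization was that linear maps R → R are x ↦ ax and linear maps R² → R are (x, y) ↦ ax + by; a sketch under that assumption (the constants a, b below are my own sample values):

```python
# A linear map f: R^2 -> R is determined by its values on (1,0) and (0,1):
#   f(x, y) = x*f(1,0) + y*f(0,1)
a, b = 3.0, -2.0   # hypothetical values of f(1,0) and f(0,1)

def f(x, y):
    return a * x + b * y

# f is linear: additive and homogeneous
assert f(1 + 4, 2 + 6) == f(1, 2) + f(4, 6)
assert f(5 * 1, 5 * 2) == 5 * f(1, 2)
# and it is recovered from its values on the two standard points
assert f(7, 9) == 7 * f(1, 0) + 9 * f(0, 1)
```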

## Lecture 6 (Feb 17)

Today we saw that our characterizations of linear maps from R to R and from R² to R are really the same, once we use the language of dot product. We then explored various properties of dot products.
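In this language, a linear map R² → R is f(v) = a · v for a fixed point a. Some standard dot-product properties (symmetry, and linearity in each slot) can be spot-checked on sample points of my choosing:

```python
def dot(u, v):
    # the dot product in the plane
    return u[0] * v[0] + u[1] * v[1]

u, v, w = (1, 2), (3, -1), (0, 4)
assert dot(u, v) == dot(v, u)                                    # symmetry
assert dot((u[0] + w[0], u[1] + w[1]), v) == dot(u, v) + dot(w, v)  # additivity
assert dot((5 * u[0], 5 * u[1]), v) == 5 * dot(u, v)             # homogeneity
```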

## Lecture 7 (Feb 22)

We saw two ways to approach a problem about linear maps from R² to R. We then introduced linear maps from R² to R².

## Lecture 8 (Feb 24)

Today we explored the rotation map. We came up with a geometric explanation of why this map is linear. We also used polar coordinates to discover a formula for how the map acts on a given point (x,y).
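The polar-coordinates formula from lecture says that rotating (x, y) by an angle t about the origin gives (x cos t − y sin t, x sin t + y cos t). A quick sanity check (sample angles and points are mine):

```python
import math

def rotate(t, p):
    # rotation of the point p = (x, y) by angle t about the origin
    x, y = p
    return (x * math.cos(t) - y * math.sin(t),
            x * math.sin(t) + y * math.cos(t))

# a quarter turn takes (1, 0) to (0, 1), up to floating-point error
px, py = rotate(math.pi / 2, (1.0, 0.0))
assert abs(px - 0.0) < 1e-12 and abs(py - 1.0) < 1e-12

# rotation preserves distance to the origin
qx, qy = rotate(0.7, (3.0, 4.0))
assert abs(math.hypot(qx, qy) - 5.0) < 1e-12
```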

## Lecture 9 (Feb 26)

We rephrased the action of a linear map from R² to R² as the action of a matrix on a point; this also inspired us to start writing points as columns rather than rows. We characterized all such linear maps, and described the corresponding matrices.
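In this notation the matrix [[a, b], [c, d]] sends the column (x, y) to (ax + by, cx + dy); in particular the columns of the matrix are the images of (1, 0) and (0, 1). A minimal sketch (the sample matrix is mine):

```python
# [[a, b],   [x]   [a*x + b*y]
#  [c, d]] * [y] = [c*x + d*y]
def apply(M, p):
    (a, b), (c, d) = M
    x, y = p
    return (a * x + b * y, c * x + d * y)

M = ((1, 2), (3, 4))               # a sample matrix, rows (1,2) and (3,4)
assert apply(M, (1, 0)) == (1, 3)  # image of (1,0) is the first column
assert apply(M, (0, 1)) == (2, 4)  # image of (0,1) is the second column
```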

## Lecture 10 (Feb 29)

We looked at how different linear maps act geometrically on the plane. We also defined the notions of composition and image.

## Lecture 11 (Mar 2)

We proved that the image of a linear map is either the entire plane (in which case it's called nonsingular), or else is entirely contained inside of some line (in which case the map is called singular). We also discussed the notion of invertibility, and stated a theorem about the invertibility of nonsingular linear maps.

## Lecture 12 (Mar 4)

Today we proved that every nonsingular linear map is invertible, and asserted that its inverse is also linear. We also proved that the composition of any two linear maps is linear, and established a formula for the matrix of the composition of two maps in terms of the matrices of those two individually.
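The composition formula can be sanity-checked numerically: the matrix of f ∘ g is the product of the matrices of f and g. A sketch with 2×2 matrices of my own choosing:

```python
def apply(M, p):
    # M acting on the column p = (x, y)
    (a, b), (c, d) = M
    x, y = p
    return (a * x + b * y, c * x + d * y)

def matmul(M, N):
    # the columns of M*N are M applied to the columns of N
    c0 = apply(M, (N[0][0], N[1][0]))
    c1 = apply(M, (N[0][1], N[1][1]))
    return ((c0[0], c1[0]), (c0[1], c1[1]))

A = ((2, 1), (0, 3))
B = ((1, -1), (4, 5))
p = (7, -2)
# composing the maps agrees with multiplying the matrices
assert apply(A, apply(B, p)) == apply(matmul(A, B), p)
```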

## Lecture 13 (Mar 7)

We've discussed how linear maps affect shapes. Today we looked at how they affect area. This led us to define the determinant of a map.
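Concretely, for a map with matrix [[a, b], [c, d]] the determinant is ad − bc, and its absolute value is the factor by which the map scales area: the unit square maps to the parallelogram spanned by the columns (a, c) and (b, d), whose area is |ad − bc|. A tiny illustration (the matrix is my choice):

```python
def det(M):
    # determinant of the 2x2 matrix [[a, b], [c, d]]
    (a, b), (c, d) = M
    return a * d - b * c

M = ((3, 1), (1, 2))
# the unit square maps to the parallelogram spanned by (3, 1) and (1, 2),
# which has area 5 = |det M|
assert det(M) == 5
```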

## Lecture 14 (Mar 9)

We first verified our conjecture from last time about the determinant of a composition. We then gave an application of this to prove that rectangles scale nicely under linear maps. We concluded by pointing out that linear maps, although defined on points, are in fact well-defined on vectors.
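The verified conjecture, det(f ∘ g) = det(f) · det(g), can be spot-checked on 2×2 matrices (the sample matrices below are my own):

```python
def det(M):
    (a, b), (c, d) = M
    return a * d - b * c

def matmul(M, N):
    # row-by-column product of two 2x2 matrices
    return ((M[0][0]*N[0][0] + M[0][1]*N[1][0], M[0][0]*N[0][1] + M[0][1]*N[1][1]),
            (M[1][0]*N[0][0] + M[1][1]*N[1][0], M[1][0]*N[0][1] + M[1][1]*N[1][1]))

A = ((2, 1), (0, 3))
B = ((1, -1), (4, 5))
# determinant of a composition = product of the determinants
assert det(matmul(A, B)) == det(A) * det(B)
```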

## Lecture 15 (Mar 11)

Today we continued discussing vectors, in particular proving that any linear map f from R² to R², when viewed as a function on the set V2 of all vectors in the plane, remains well-defined. We ended by hinting at change of basis.

## Lecture 16 (Mar 14)

After reviewing our work from last time, we resumed our exploration of change of basis. We realized that there was an implicit linear map at play, which we can think of as an English-Shibbolese dictionary. If this map is nonsingular, we proved that any vector can be expressed in Shibbolese.
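The dictionary computation can be made concrete. A sketch (the basis and the vector are my own choices, not from lecture): translating v into the new basis {b1, b2} means solving c1·b1 + c2·b2 = v, and Cramer's rule does this whenever the basis matrix is nonsingular.

```python
from fractions import Fraction

b1, b2 = (1, 1), (1, -1)   # a hypothetical "Shibbolese" basis
v = (5, 3)                 # a vector given in standard coordinates

d = b1[0] * b2[1] - b2[0] * b1[1]   # det of the basis matrix; here -2
assert d != 0                       # nonsingular, so translation always works

# Cramer's rule for c1*b1 + c2*b2 = v
c1 = Fraction(v[0] * b2[1] - b2[0] * v[1], d)
c2 = Fraction(b1[0] * v[1] - v[0] * b1[1], d)
assert (c1 * b1[0] + c2 * b2[0], c1 * b1[1] + c2 * b2[1]) == v
```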

## Lectures 17-18 (Mar 16 & 18)

We concluded our discussion of change-of-basis. We then turned to the Singular Value Decomposition. We considered it from two perspectives: one as a method of understanding what a map f does geometrically, the other as a way of thinking about f as mapping a rectangular lattice to another rectangular lattice.

## Lectures 19-20 (Apr 4 & 6)

We completed our discussion of the singular value decomposition, proving various unproved assertions from last time. We then began exploring powers of a certain matrix, which turned out to be related to the Fibonacci numbers.

## Lecture 21 (Apr 8)

We first discussed associativity, specifically of function composition. Next we proved (by induction) a formula for powers of a certain matrix in terms of Fibonacci numbers. Finally, we motivated and defined the notion of similarity, and outlined our strategy for finding a formula for Fibonacci numbers.
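The induction proved in class can be checked numerically. A sketch, assuming the "certain matrix" is the usual Fibonacci matrix Q = [[1, 1], [1, 0]] with Qⁿ = [[F(n+1), F(n)], [F(n), F(n−1)]] (helper names are mine):

```python
def fib(n):
    # iterative Fibonacci with F(0) = 0, F(1) = 1
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

Q = ((1, 1), (1, 0))
P = ((1, 0), (0, 1))   # identity; after n steps this holds Q^n
for n in range(1, 12):
    P = ((P[0][0]*Q[0][0] + P[0][1]*Q[1][0], P[0][0]*Q[0][1] + P[0][1]*Q[1][1]),
         (P[1][0]*Q[0][0] + P[1][1]*Q[1][0], P[1][0]*Q[0][1] + P[1][1]*Q[1][1]))
    assert P == ((fib(n + 1), fib(n)), (fib(n), fib(n - 1)))
```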

## Lectures 22-24 (Apr 11, 13 & 15)

We found a formula for the n-th Fibonacci number using matrices. This inspired us to study the spectral decomposition, which in turn led us to explore eigenvalues and eigenvectors.
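The eigenvalue computation leads to the closed form known as Binet's formula, F(n) = (φⁿ − ψⁿ)/√5 with φ = (1+√5)/2 and ψ = (1−√5)/2. A quick numerical check (the helper names are mine):

```python
import math

phi = (1 + math.sqrt(5)) / 2   # the two eigenvalues of the Fibonacci matrix
psi = (1 - math.sqrt(5)) / 2

def fib_closed(n):
    # Binet's formula; rounding absorbs floating-point error for small n
    return round((phi ** n - psi ** n) / math.sqrt(5))

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

assert all(fib_closed(n) == fib(n) for n in range(30))
```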

## Lecture 25 (Apr 18)

We introduced the notion of vector space, and explored a number of examples. The material is very similar to Chapter 1, Section 1 of the textbook. (The book, Linear Algebra Done Wrong by Sergei Treil, is available for free (and legal!) pdf download.) Perhaps the biggest difference from the book was the addition of one special example: MSS3, the vector space of all 3x3 magic squares.

## Lecture 26 (Apr 20)

We warmed up by proving that 0v = 0 for any vector v in a vector space V. We then defined the notion of a basis of a vector space, and gave examples and nonexamples of bases. See section 2 of the textbook.
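The warm-up argument fits on one line, using only the distributive axiom and then cancelling 0v from both sides:

```latex
0v = (0 + 0)v = 0v + 0v
\;\Longrightarrow\; 0v + (-(0v)) = (0v + 0v) + (-(0v))
\;\Longrightarrow\; 0 = 0v.
```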

## Lecture 27 (Apr 22)

We warmed up by proving that a vector space has a unique additive identity (which we call 0). We then defined the concepts of spanning set, linear dependence, and linear independence. Finally, we proved that a set of vectors is linearly independent if and only if none of the vectors can be written as a linear combination of the others. See section 2 of the textbook.

## Lecture 28 (Apr 25)

We proved what I called the Fundamental Property of Bases (Treil calls it by the less romantic name Proposition 2.7). Then we proved that any two bases of a vector space V have the same number of elements. (We did this by using the Steinitz Exchange Trick.) This allowed us to define the dimension of a vector space: it's the number of vectors in a basis. (See the notes posted below for the proof that every basis has the same number of elements.)

## Lecture 29 (Apr 27)

We discussed how to use the fundamental property of bases to determine whether or not a given set of vectors is a basis. In particular, we proved that {(1,1),(1,-1)} is a basis of R², while {(1,2,-1),(2,-1,1),(8,1,1)} is not a basis of R³. We then proved that any spanning set contains a basis (Proposition 2.8 in Chapter 1 of the textbook).
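Both examples can be double-checked with determinants (a criterion we met earlier in the course for the plane: n vectors in Rⁿ form a basis exactly when the matrix having them as columns has nonzero determinant). The helper functions below are my own:

```python
def det2(u, v):
    # determinant of the 2x2 matrix with columns u, v
    return u[0] * v[1] - u[1] * v[0]

def det3(u, v, w):
    # determinant of the 3x3 matrix with columns u, v, w
    # (cofactor expansion along the first row)
    return (u[0] * (v[1] * w[2] - w[1] * v[2])
          - v[0] * (u[1] * w[2] - w[1] * u[2])
          + w[0] * (u[1] * v[2] - v[1] * u[2]))

assert det2((1, 1), (1, -1)) == -2                      # nonzero: a basis of R^2
assert det3((1, 2, -1), (2, -1, 1), (8, 1, 1)) == 0     # zero: not a basis of R^3
# indeed, one can check the dependence 2*(1,2,-1) + 3*(2,-1,1) = (8,1,1)
assert tuple(2 * x + 3 * y for x, y in zip((1, 2, -1), (2, -1, 1))) == (8, 1, 1)
```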

## Lecture 30 (Apr 29)

We began by defining what it means for a vector space to be finite-dimensional: that there exists a finite spanning set. This definition immediately allowed us to prove that any finite-dimensional vector space has a basis. Next, we sketched a proof that any linearly independent set of vectors in a finite-dimensional space is contained in a basis of that space; you will write out this proof rigorously in this week's Problem Set. Finally, we defined the notion of a linear map from one vector space to another (see Chapter 1, Section 3 of the text; note that Treil calls this a linear transformation). We concluded by considering four examples of linear maps. The first was the function f : R³ → R² defined by f(x,y,z) = (x,y). The second was g : R² → R³ defined by g(x,y) = (x,y,2x+y). The third example was the differential operator d/dx : Pₙ → Pₙ₋₁. The final example we considered was h : MSS3 → R, which sends a magic square to its magic sum.

## Lecture 31 (May 2)

Today we discussed the concepts of invertibility and isomorphisms. Since the definition I gave is different from that in the book, I've written up a lecture summary (click below).

## Lecture 32 (May 4)

Today we introduced notions which will help us measure how far a linear map is from being an isomorphism: these are the image and kernel of the map. Along the way we defined the notion of subspace, and stated the Rank-Nullity Theorem.

## Lecture 33 (May 6)

We started by proving the Rank-Nullity Theorem. Then we explored some of its consequences.
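One of the simplest instances is the projection f(x, y, z) = (x, y) from Lecture 30: its kernel is the z-axis (nullity 1), its image is all of R² (rank 2), and indeed 2 + 1 = 3 = dim R³. A few sanity checks (these illustrate the theorem, they do not prove it):

```python
def f(x, y, z):
    # the projection from Lecture 30
    return (x, y)

# kernel: exactly the z-axis, so nullity = 1
assert f(0, 0, 5) == (0, 0)
# image: all of R^2, since (a, b) = f(a, b, 0), so rank = 2
a, b = 3, -7
assert f(a, b, 0) == (a, b)
# Rank-Nullity: rank + nullity = dim of the domain
assert 2 + 1 == 3
```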

## Lecture 34 (May 9)

We discussed how to represent an abstract linear map as a matrix. Among other things, we represented the differential operator as a matrix.
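The derivative example can be made concrete: in the monomial bases {1, x, x², x³} of P₃ and {1, x, x²} of P₂, a polynomial a₀ + a₁x + a₂x² + a₃x³ becomes the coefficient column (a₀, a₁, a₂, a₃), and differentiation becomes a 3×4 matrix (the sample polynomial below is mine):

```python
# the matrix of d/dx : P_3 -> P_2 in the monomial bases
D = [[0, 1, 0, 0],
     [0, 0, 2, 0],
     [0, 0, 0, 3]]

def apply(M, v):
    # matrix acting on a coefficient column
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# d/dx (1 + 2x + 5x^2 - x^3) = 2 + 10x - 3x^2
assert apply(D, [1, 2, 5, -1]) == [2, 10, -3]
```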

## Lectures 35-36 (May 11 & 13)

First, we discussed how to view abstract vector spaces (and associated linear maps) in a more concrete way. Next, we talked about the determinant of a linear map; we went over two methods (the notes also contain a description of another method due to Lewis Carroll, but that's just for fun -- you won't be tested on that, of course). Our second approach led to the concept of triangular matrices, and also to a method for finding the inverse of a given matrix. Finally, we briefly touched on the singular value decomposition and the spectral decomposition.