Today we discussed the simplest nontrivial case of measuring the size of something: the length of a subset of ℝ. Some subsets, like closed or open intervals, are easy to assign length to. Even when we removed a few points from an interval it was easy to assign a length. But then we turned to a harder example: what's the length of the set of all rationals between 0 and 1? Our initial intuition was that it should be 0, because there are far fewer rationals than irrationals. On the other hand, rationals are dense in ℝ, so they still seem to occupy a large proportion of the unit interval. After some playing around, we hit on the idea of covering this set by a bunch of intervals, thereby obtaining an upper bound on the length of the set. Thinking about it more, we realized that by choosing our intervals appropriately, we could bound the length of the rationals between 0 and 1 by an arbitrarily small positive number; it follows that the length of the rationals between 0 and 1 must be 0. This led us to conjecture a nice definition for how to measure the length of an arbitrary subset of ℝ: consider all possible coverings of the set by open intervals, and take the infimum of the total lengths of all such coverings. We ended class by backing up a bit and asking what properties, at the bare minimum, we'd like an arbitrary notion of length to satisfy. We'll pick up with this topic next class.
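To make the covering trick concrete, here is a small sketch (in Python; my illustration, not something we wrote in class) that lists the rationals in [0,1] and covers the k-th one by an interval of length ε/2^(k+1), so that the total length of the cover stays below ε no matter how many rationals we cover:

    from fractions import Fraction

    def rationals_in_unit_interval():
        """Enumerate the rationals in [0,1] (with repetitions) as p/q for q = 1, 2, 3, ..."""
        q = 1
        while True:
            for p in range(q + 1):
                yield Fraction(p, q)
            q += 1

    def cover_total_length(epsilon, n_terms):
        """Cover the first n_terms rationals in the list, giving the k-th an interval of
        length epsilon / 2**(k+1); return the total length of all the intervals used."""
        rationals = rationals_in_unit_interval()
        total = Fraction(0)
        for k in range(n_terms):
            next(rationals)                      # the k-th rational (where it sits doesn't matter)
            total += Fraction(epsilon) / 2 ** (k + 1)
        return total

    total = cover_total_length(Fraction(1, 100), 1000)
    print(float(total), total < Fraction(1, 100))   # approximately 0.01, and strictly less than 1/100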
After discussing some administrative matters about the class and using Zoom, we started creating a list of properties that any reasonable notion of length should satisfy. There were many such properties! However, taking a cue from an observation by Alicia, we realized that many of these properties are consequences of one another, and thus don't need to be listed among our set of axioms defining length. Eventually we settled on a fairly short list: length should be a function ℓ : ℘(ℝ) → [0,∞] such that (1) ℓ(A∪B) = ℓ(A)+ℓ(B) for any disjoint sets A and B, and (2) ℓ([a,b]) = b-a whenever b≥a. We then tried to make rigorous our attempted proof from last class that ℓ(ℚ∩[0,1])=0, using just these properties. We quickly saw that one result we needed for the proof to go through was a countable subadditivity property: for any countable collection of sets Ak, the length of the union of all the Ak should be bounded above by the sum of their lengths. It wasn't hard to see how to prove a version of this for finitely many sets Ak, but for infinitely many it was less clear. By writing the infinite sum and infinite union in terms of limits of finite quantities, however, we were able to see that countable subadditivity is a consequence of a natural conjecture: that the operations of taking length and taking a limit commute. We'll pick up from here next class.
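Spelled out, the reduction we were after looks like this (a quick sketch in the notation above): if ℓ(A1 ∪ ... ∪ An) ≤ ℓ(A1) + ... + ℓ(An) for every finite n, and if taking length commutes with the limit of the increasing sets A1 ∪ ... ∪ An, then

    ℓ(A1 ∪ A2 ∪ A3 ∪ ...) = limn→∞ ℓ(A1 ∪ ... ∪ An) ≤ limn→∞ [ℓ(A1) + ... + ℓ(An)] = ℓ(A1) + ℓ(A2) + ℓ(A3) + ...,

which is exactly countable subadditivity.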
Last class we tried to make rigorous our proof that the length (aka measure) of ℚ∩[0,1] was 0, and realized that the main obstacle was proving the countable subadditivity conjecture: the measure of the union of a countable collection of sets is bounded above by the sum of the measures of these sets. (Recall that we could prove a finite analogue of this.) From our discussion one might be led to believe that the set being countable is what makes it have measure 0, but this is not the case: today we proved that the Cantor set, which is uncountable, also has measure 0. Next, we recalled that to prove the countable subadditivity conjecture it suffices to prove that measure and limits commute; more precisely, if we could prove that for any given increasing sequence of sets B1 ⊆ B2 ⊆ B3 ⊆ ... we have ℓ(B1 ∪ B2 ∪ B3 ∪ ...) = limk→∞ ℓ(Bk), then countable subadditivity would follow.
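To see quantitatively why the Cantor set has measure 0: after n stages of the construction it is covered by 2^n closed intervals, each of length 3^(-n). Here is a little sketch (in Python, added as an illustration) of how quickly that total length shrinks:

    from fractions import Fraction

    def cantor_cover_length(n):
        """Total length of the 2**n intervals that cover the Cantor set after n stages."""
        return Fraction(2, 3) ** n

    for n in (1, 5, 10, 20, 40):
        print(n, float(cantor_cover_length(n)))
    # The total length (2/3)**n tends to 0, so the Cantor set can be covered by intervals of
    # arbitrarily small total length -- the same situation that forced the rationals in [0,1]
    # to have measure 0.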
We then returned to Ben's idea about how to measure sets from our first class: for any cover of a set E by countably many intervals we can get an upper bound on the measure of E by summing the lengths of all these intervals, so if we take the infimum of all such upper bounds we should get the measure of the set E. It turns out Ben's proposal has a name: the exterior measure, which we denoted m∗. This proposal seems very reasonable, but when we tried to apply this new definition to compute the measure of a closed interval [a,b], we already ran into some difficulties: how do we know there isn't some cover of [a,b] which produces a total length smaller than b-a? After struggling with this, we pinned down the difficulty: for any infinite cover of [a,b] by intervals, we need to know that there's some finite subcover, a fact which is far from obvious. This led us to review compactness, in particular giving a terse overview of Dirichlet's local-to-global approach to proving that a function that's continuous on [a,b] must be bounded on [a,b], and then recalling the Heine-Borel theorem. We'll pick up here next class.
We continued our investigation of the exterior measure m∗ from last time. We warmed up by proving that m∗([a,b]) = b-a, a property any reasonable measure on ℝ should have. The main idea in the proof was, given a cover of [a,b] by countably many (not necessarily open) intervals, to enlarge each interval in the cover slightly to get an open cover of [a,b] and then apply Heine-Borel to extract a finite open subcover. Since the subcover is finite and only ever so slightly larger than the original cover, we win. Next we proved some nice properties of the exterior measure. Monotonicity was straightforward; countable subadditivity was much less so. Next time we'll prove some other nice properties of the exterior measure.
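In slightly more detail, the lower bound runs roughly as follows (a sketch): given any countable cover of [a,b] by intervals I1, I2, I3, ..., enlarge each Ij to an open interval Jj that is longer by only ε/2^j. By Heine-Borel, finitely many of the Jj already cover [a,b], and finitely many intervals covering [a,b] must have lengths summing to at least b-a. Hence

    b - a ≤ (sum of the lengths of the chosen Jj) ≤ (sum of the lengths of all the Ij) + ε.

Since ε > 0 was arbitrary, every cover of [a,b] has total length at least b-a, so m∗([a,b]) ≥ b-a; the reverse inequality comes from covering [a,b] by a single open interval that is only slightly longer.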
After reviewing our work from last class, we introduced and proved a new property of the exterior measure: a formula for the exterior measure of an arbitrary set in terms of the exterior measure of open supersets. This is useful because finding a cover of a nice open set can be much more tractable than finding a cover of an arbitrary set. To understand our result properly, we reviewed some basic topology, giving informal definitions of open, closed, boundary points, interior points, and exterior points, and then formalizing these in terms of open balls. We then reviewed a few basic (and very useful!) properties of open and closed sets, including the Heine-Borel theorem. Finally, we developed a proof strategy for our property (which, as before, involved imagining the existence of a perfect cover), and then introduced a few tweaks to make the proof rigorous. Next time we'll tackle the million dollar question: is the exterior measure a length function, in the sense we defined earlier? We'll see!
We started by thinking about the strange-looking measure μ(A) defined in Problem Set 2. Why is it natural? We came up with a few interpretations. I then pointed out that it's possible to rewrite the definition of μ(A) in the form of a sum, and Gabriel recognized it as a Riemann sum for the characteristic function of A. In other words, μ is trying to compute an integral! We discussed why integrating a characteristic function of a set is a reasonable way to approach its length, and the pitfalls of this approach (because Riemann integration can't handle functions like the characteristic function of ℚ). Next, we returned to our study of the exterior measure, and attempted to prove that the measure of a disjoint union is the sum of the measures. The proof idea, while pretty, had some serious flaws, some of which we resolved but others of which seemed insuperable. Instead of searching for a more involved proof of the claim we were after, we changed the claim in such a way that our proof demonstrated it! Namely, we proved that if d(A,B) > 0, then m∗(A∪B) = m∗(A) + m∗(B). We then used this to prove something that, at first glance, doesn't appear to follow: that the exterior measure of a countable union of almost disjoint intervals is equal to the sum of the lengths of these intervals. We concluded the class by defining a fundamental notion: that of measurability. We'll pick up here next time.
We started by reviewing the definition of measurable, and comparing it to property 3 of the exterior measure -- in words, they sound the same (you're trying to approximate a given set by an open set containing it), but the crucial difference is that property 3 asserts that this can always be done in such a way that the difference between the measures is small, whereas the definition of measurable is that the measure of the difference can be made small. One very important point is that being measurable is not the same as having an exterior measure -- every set has a well-defined exterior measure! (Check this.) But measurable sets play nice with each other in ways that arbitrary sets might not. For example, countable unions of measurable sets are measurable, and we proved (with some effort) that the measure of a countably infinite union of disjoint measurable sets is equal to the sum of the measures of the individual sets. By contrast, non-measurable sets can have pretty bad behavior. For example, there exists a subset B of [0,1] that has exterior measure 1, but whose complement in [0,1] also has exterior measure 1. We will construct this set (and other non-measurable sets) later on, but it's known that they're all quite horrible -- Solovay proved that any non-measurable set must be constructed using the axiom of choice. This is reassuring, since it implies that any set you encounter in the wild will be measurable.
We enunciated a general strategy for thinking about results in measure theory: first prove the results in a special case where all sets in sight are nice and measurable (e.g. open), and then approximate an arbitrary set by a nice one. So long as the costs incurred by the approximation aren't too great, you can still win! We then discussed the first problem from the last problem set, which has a fancy name: the Caratheodory criterion. Since you proved that it's equivalent to being (Lebesgue) measurable, we can use either the Caratheodory or the Lebesgue concept of measurability moving forward. While Lebesgue's definition is more intuitive, Caratheodory's has the advantage that it defines measurability purely in terms of the exterior measure; it therefore generalizes nicely to spaces in which the topology is complicated. Next we stated and proved the Monotone Convergence Theorem for Lebesgue measurable sets. Roughly speaking, this asserts that any monotone sequence of measurable sets Ak that converges to A satisfies m(A) = limk→∞ m(Ak). In other words, the Lebesgue measure commutes with limits for monotone sequences of measurable sets. After stating this more carefully and exploring the limitations of the theorem, we stated a nice proposition that allows us to approximate sets of finite measure by particularly nice sets: by compact sets, and by finite unions of cubes. We proved the latter.
We noted that the set of all measurable sets in ℝn has some algebraic structure: we can `add' (union) measurable sets together to get measurable sets, `subtract' measurable sets to get a measurable set, and `multiply' (intersect) measurable sets together to get a measurable set. We even have an additive identity (the empty set) and a multiplicative identity (the whole space ℝn). This led to a quick overview of some of the main structures in abstract algebra: groups, rings, and fields. We then saw that while the space of all measurable sets fails to even be a group under union or intersection, it forms a ring under symmetric difference as addition and intersection as multiplication! We next generalized these nice algebraic structures to a σ-algebra: any collection of sets that is closed under countable unions and complements, and also contains the empty set. (The σ refers to the German Summe, or union.) Thus, the set of measurable sets is an example of a σ-algebra. A particularly nice σ-algebra, which you'll encounter when you take any course in probability, is the Borel σ-algebra: this is the smallest σ-algebra that contains all open sets. Why is this nice? Because if we know the measure of all open sets, this determines the measure of arbitrary sets. Although there exist measurable sets that are too weird to belong to the Borel σ-algebra of ℝ, they're only off by a set of measure zero. In particular, we discussed a couple of decently nice types of sets in the Borel σ-algebra -- called Fσ and Gδ sets -- that are only slightly more complicated than open and closed sets, but which serve as truly excellent approximations to any measurable set. We then reviewed the different types of approximations to measurable sets, and the associated cost of making each approximation.
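As a sanity check on the ring structure (a brute-force sketch in Python over a tiny universe; the axioms being checked are purely set-theoretic, so nothing about measure is needed):

    from itertools import chain, combinations

    universe = {0, 1, 2}
    subsets = [frozenset(c) for c in chain.from_iterable(
        combinations(universe, r) for r in range(len(universe) + 1))]

    def add(A, B):
        return A ^ B          # symmetric difference plays the role of addition
    def mul(A, B):
        return A & B          # intersection plays the role of multiplication

    zero, one = frozenset(), frozenset(universe)

    assert all(add(A, zero) == A and mul(A, one) == A for A in subsets)    # identities
    assert all(add(A, A) == zero for A in subsets)                         # each set is its own additive inverse
    assert all(add(add(A, B), C) == add(A, add(B, C))
               for A in subsets for B in subsets for C in subsets)         # addition is associative
    assert all(mul(A, add(B, C)) == add(mul(A, B), mul(A, C))
               for A in subsets for B in subsets for C in subsets)         # multiplication distributes
    print("ring axioms hold for all subsets of", set(universe))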
On your midterm, you proved a toy version of the 1-dimensional Banach-Tarski theorem. Today we stated a more general one: that given any two intervals, it's possible to cut one into countably many pieces that, when properly translated, form the other interval. We used this to prove a remarkable result: there doesn't exist any non-trivial measure defined on all subsets of ℝ that's both countably additive and translation-invariant! This explains why we've been using the outer measure, and exploring measurable sets: the outer measure is translation-invariant and almost additive, and on an enormous collection of sets (the measurable ones) it has both desirable properties! We then stated Banach-Tarski in 3d, which similarly shows that one can't even have a non-trivial measure defined on all subsets of ℝ3 that's finitely additive and invariant under translations and rotations. Having discussed the limitations of measure, we turned to limitations of measurability. We first gave a probabilistic construction of a non-measurable subset of [0,1], but this wasn't really rigorous and not everyone was convinced. We then proved that the Vitali set -- which you constructed on your midterm -- has positive exterior measure but is non-measurable. Along the way, we formalized the Axiom of Choice as an assertion about the existence of a `choice function'.
Today we started on integration theory. Actually, we've already seen a bit of this earlier in the course, namely, we realized that a good potential way to measure a set is to integrate its characteristic function. Unfortunately, while this idea is very reasonable, it doesn't work: the characteristic function of the rationals between 0 and 1 isn't integrable! Or, more precisely (and with a bit of foreshadowing): it's not Riemann integrable. To explain this, we reviewed Riemann's theory of integration. Ben K realized this is strange, because it's possible to construct a sequence of functions that tends to the characteristic function of the rationals between 0 and 1, and each of these is Riemann integrable. Moreover, the integral of each of these is 0! This illustrates one of the most unfortunate defects of Riemann's theory of integration: it doesn't play nice with limits of functions. This is not a narrow issue -- it lies at the heart of Fourier analysis. We spent the end of class discussing this. Next time we'll introduce a different approach to integration that is strictly more powerful than Riemann's approach, and plays much more nicely with limits.
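For the record, here is the precise obstruction (a short computation I'm adding): every subinterval of every partition P of [0,1] contains both rational and irrational points, so if χ denotes the characteristic function of ℚ∩[0,1], the upper and lower Darboux sums are

    U(χ, P) = 1   and   L(χ, P) = 0   for every partition P.

The upper and lower integrals therefore disagree, and χ is not Riemann integrable.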
Last time we discussed functions that weren't Riemann integrable. What functions are Riemann integrable? We sketched a proof that any continuous function on a closed interval is. Amazingly, the converse to this is essentially true: Lebesgue's criterion asserts that a bounded function on a closed interval is Riemann integrable iff it's continuous almost everywhere. Among other things, this means there are loads of functions that aren't Riemann integrable. Do we just give up on integrating these functions? We went back to the drawing board and reviewed the original approach to integration, due to Archimedes: his idea was to fill a region with triangles. Riemann's idea was to fill a region with vertical slices. Lebesgue's idea was to fill a region with horizontal slices. After some playing around, we formulated a conjecture for what the integral of a function f on [0,1] should look like: it should be a certain Riemann integral (over y) of the measures of the sets of the form {f ≥ y}. For this to be nicely behaved, we need to have all the sets {f ≥ y} be measurable! This motivated the definition of a measurable function. We finished by stating a bunch of equivalent definitions of measurable function.
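Here is a small numerical illustration of the horizontal-slicing idea (a Python sketch with f(x) = x² on [0,1] as a stand-in example, not a computation from class): integrating the measures of the sets {f ≥ y} over y reproduces the usual integral.

    import numpy as np

    f = lambda x: x ** 2
    xs = np.linspace(0, 1, 2001)   # grid on [0,1]
    ys = np.linspace(0, 1, 2001)   # slice heights
    fx = f(xs)

    # measure of {x in [0,1] : f(x) >= y}, approximated by the fraction of grid points in the set
    slice_measures = np.array([np.mean(fx >= y) for y in ys])

    vertical = fx.mean()                 # Riemann-style: average value of f over [0,1]
    horizontal = slice_measures.mean()   # Lebesgue-style: integral over y of m({f >= y})

    print(vertical, horizontal)          # both are close to 1/3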
After reviewing the motivation for our definition of a (Lebesgue) measurable function, we stated and (sort of) proved a bunch of equivalent definitions for this. Next, in order to better understand the connection between continuous functions and measurable functions, we reviewed some properties of continuous functions. In particular, we saw that while the continuous image of an open set need not be open, and the continuous image of a closed set need not be closed, the inverse image of an open set must be open and the inverse image of a closed set must be closed. In fact, these conditions are equivalent to the continuity of a function. Similarly, a function is measurable iff the inverse image of any open set is measurable, iff the inverse image of any closed set is measurable. This property implies that all continuous functions are measurable. Next, we saw that it's not true that the composition of two measurable functions is measurable, but that under certain hypotheses it is true. We concluded by discussing sequences of functions. Continuity isn't preserved under limits, suprema, infima, lim sup, or lim inf. By contrast, measurability is!
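The key identity behind that last claim is worth recording (for suprema, say): {x : supn fn(x) > a} = ⋃n {x : fn(x) > a}, a countable union of measurable sets, so supn fn is measurable whenever every fn is. Infima are handled the same way, and lim sup and lim inf are built out of sups and infs of tails.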
We reviewed our analogies and non-analogies between measurable and continuous functions. Along the way, we discovered a nice explanation for the name "measurable function": the characteristic function of a set A is measurable iff the set A is measurable. We then considered a few questions about the behavior of measurable functions. For example, given a measurable function f, is the image of a measurable set a measurable set? (No!) What about the preimage? (No!) This is confusing, because if you look on Wikipedia you'll see that the definition of a measurable function from X to Y is that the preimage of any measurable set in Y is a measurable set in X. It turns out that this is the case for us as well, except that we're using the word measurable in two different ways: our measurable functions map ℝ (with "measurable" meaning all sets that are Lebesgue measurable) to ℝ (with "measurable" meaning all the Borel sets). In other words, when we say "measurable function" we should really be saying "Lebesgue-to-Borel measurable function". Probabilists deal with Borel-to-Borel measurable functions, because these have nicer behavior and probabilists only care about Borel sets anyway (any Lebesgue measurable set is basically a Borel set, up to an error of measure 0). We're dealing with Lebesgue-to-Borel measurable functions in spite of the asymmetry, because it's a larger class of functions for which Lebesgue integration works. On HW you'll see the issue with using Lebesgue-to-Lebesgue measurable functions.
We concluded with a beautiful theorem of Cauchy's asserting that a convergent infinite sum of continuous functions is itself a continuous function, and went over Cauchy's proof. Stephen raised an objection to the proof, and his objection turns out to be quite important, since the proof is wrong! What could you do to correct Cauchy's proof? Think about it before Friday's class, and we'll discuss it then!
After briefly reviewing Cauchy's false proof of his false theorem, we analyzed exactly where it went wrong. Although it was clear that the issue had something to do with how we chose the various parameters, it took a bit more time to pin down the precise issue: it was impossible to choose either of the two main parameters without having already chosen the other one! This led us to write down the precise condition which, if it held, would fix this issue and allow Cauchy's proof to go through. This condition has a name: uniform convergence. Thus, given a sequence of functions, there are two senses in which it might converge to a function: pointwise, or uniformly. We concluded by discussing Littlewood's three principles; we'll state them more rigorously (and prove them!) next time.
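A standard way to see the difference (a Python sketch with fn(x) = x^n on [0,1] as my stand-in example, not necessarily the one from class): the sequence converges pointwise to a discontinuous limit, but the sup of |fn - limit| never shrinks, so the convergence is not uniform.

    import numpy as np

    xs = np.linspace(0, 1, 100_001)
    limit = np.where(xs < 1, 0.0, 1.0)   # the (discontinuous) pointwise limit of x**n on [0,1]

    for n in (1, 5, 25, 125, 625):
        sup_gap = np.max(np.abs(xs ** n - limit))
        print(n, round(0.99 ** n, 6), sup_gap)
    # At each fixed x < 1 the values x**n tend to 0 (e.g. 0.99**n above), yet the sup over [0,1]
    # of |x**n - limit(x)| stays essentially 1: pointwise convergence without uniform convergence.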
We stated Littlewood's three principles (about measure theory), which are heuristics for keeping track of what to expect. The first of these we've already seen a rigorous form of earlier; the other two are fundamental results called Egorov's theorem and Lusin's theorem, which we stated. We also explored the necessity of the hypotheses of these theorems, and concluded (by using the travelling saleswave function) that the finite measure hypothesis in Egorov's theorem is necessary. We discussed a bit of the history -- both triumphant and sad -- of Egorov and his PhD student Lusin. We finished by outlining an approach to proving Egorov's theorem. While the approach has a flaw that makes it fail, it forms the key idea underlying the proof we'll present next time.
We started by reviewing the almost-but-not-quite proof of Egorov's theorem from the end of last class. Then we adapted the idea to give an actual proof of the theorem. Lusin's theorem turned out to be a relatively straightforward consequence of Egorov's theorem, once we assumed a key lemma: that any measurable function is the pointwise a.e. limit of step functions. Although the a.e. restriction cannot be dropped for step functions, it can be if we replace step functions with simple functions! We'll return to this next class.
Today we proved two basic results about how to approximate measurable functions by nice functions. First, we proved that any measurable function f is the (pointwise) limit of a sequence of simple functions. We accomplished this in stages. We proved this result for non-negative f by constructing a sequence of functions fk that are all bounded and supported on a set of finite measure, and that converge pointwise to f. Then we constructed a simple function ψk that's extremely close to fk everywhere. As a nice bonus, our construction yields a monotonically increasing sequence of simple functions that converge to f pointwise. Having accomplished the task for non-negative f, we expressed an arbitrary f as a difference of two non-negative functions, and thus were able to easily deduce the result for arbitrary (measurable) f. We next turned to a marginally weaker result, which has the advantage of approximating f by an even simpler type of function: we proved that any measurable f is the pointwise a.e. limit of step functions. The proof is an ingenious application of the Borel-Cantelli lemma.
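Here is a sketch (in Python) of the value-rounding step in the standard construction for a non-negative f: truncate at height k, then round the values down to the nearest multiple of 2^(-k). (In class the construction also cuts off to a set of finite measure; this sketch only illustrates how the rounded values take finitely many levels and increase pointwise toward f.)

    import math

    def psi(f, k):
        """The k-th dyadic approximation to a non-negative function f:
        truncate at height k, then round down to a multiple of 2**(-k)."""
        def approx(x):
            return min(math.floor(f(x) * 2 ** k) / 2 ** k, k)
        return approx

    f = lambda x: x ** 2 + 1        # a sample non-negative function
    x = 0.7
    for k in range(1, 8):
        print(k, psi(f, k)(x))      # the values increase toward f(0.7) = 1.49
    print(f(x))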
Today we developed the theory of Lebesgue integration for simple functions. After discussing a few equivalent formulations of the notion of a simple function, we formally defined the integral of a simple function in the most natural way possible. There was a hitch, however -- our definition depends on the way we write down the simple function. With some effort, we proved that for any reasonable way of writing down a simple function, our definition produces the same integral. We then stated a number of nice properties satisfied by the Lebesgue integral for simple functions, the proofs of which are all fairly pedestrian. Finally, we outlined our approach for extending the Lebesgue integral to all measurable functions; it is precisely the same set of reductions as in our proof that any measurable function is the limit of simple functions.
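Here is a toy version of the definition and of the well-definedness issue (a Python sketch; a simple function is represented here as finitely many constant values on disjoint intervals, so the measure of each piece is just its length -- an illustration only, not the general setting):

    def integrate_simple(pieces):
        """Integral of a simple function given as a list of (value, (left, right)) pairs
        with pairwise disjoint intervals: the sum of value * length over the pieces."""
        return sum(value * (right - left) for value, (left, right) in pieces)

    # The same simple function written in two different ways: it equals 2 on [0,1) and 5 on [1,3).
    rep1 = [(2, (0, 1)), (5, (1, 3))]
    rep2 = [(2, (0, 1)), (5, (1, 2)), (5, (2, 3))]

    print(integrate_simple(rep1), integrate_simple(rep2))   # both print 12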
Last time we established an integration theory for simple functions. This is the first step in the standard machine: a sequence of proofs which first establish something in the case of simple functions, then extend it to the case of functions that are bounded and supported on a set of finite measure, then extend from there to all measurable non-negative functions, then from there to all measurable functions. We've already encountered this progression once before, in our proof that every measurable function is the limit of a sequence of simple functions (indeed, this proof is the prototype of the standard machine).
Accordingly, our next step in developing the theory of Lebesgue integration is to extend from simple functions to all functions that are Bounded and Supported on a set of finite measure (which I'll henceforth refer to as BS functions). Modeling our approach on the proof of the simple approximation lemma, we defined the integral of a BS function F to be the limit of the integrals of any sequence of simple functions that (a) converge to F almost everywhere, (b) are all bounded by some real number M, and (c) are all supported on some set E of finite measure. A priori this definition isn't well-posed, since the limit of the integrals might not converge, and even if it did it might depend on our choice of simple functions satisfying (a), (b), and (c). Using Egorov's theorem, we were able to prove that our notion of the integral of a BS function is well-defined. Having extended Lebesgue integration to this wider class of functions, we proved a couple of nice results. The first was that if the integral of a non-negative BS function is 0, that forces the function to be 0 almost everywhere. (Our proof was a collaborative effort, combining key insights from Alicia, Ben, other Ben, and Josh.) Another nice result we were able to prove is the Bounded Convergence Theorem, a hint of some remarkable results to come. The BCT asserts that for any sequence of measurable functions {fn} that are uniformly bounded, all supported on a common set of finite measure, and converge almost everywhere to a function f, the limit of the integrals of the fn equals the integral of f. (This is similar to our definition of the integral for any BS function, but now we're allowing our sequence to not consist of simple functions.) The proof was exactly the same trick using Egorov as we've already seen twice before. We also saw an example that showed that we can't mess too much with the hypotheses of the BCT.
Our Egorov-trick proof led us to a different notion of convergence, called L1 convergence (the L stands for Lebesgue). This is a fundamental concept in analysis, and we'll return to it soon. In the meantime, we derived a satisfying consequence of the BCT: that Riemann integration is subsumed by Lebesgue integration, in the sense that any Riemann integrable function is also Lebesgue integrable, and that the two integrals agree. The key insight is that this is trivially true for step functions, and that the Riemann integral is defined in terms of a limit of integrals of step functions; the BCT then allows us to take the limit inside the integral!
Thus far we've developed a theory of integration for simple functions and for BS functions (i.e. functions that are Bounded and Supported on a set of finite measure). This was enough for us to prove some nice results: the Bounded Convergence Theorem (BCT henceforth) gives a nice LEO, which in turn allowed us to prove that Lebesgue integration subsumes Riemann integration. Today we extended integration to all non-negative measurable functions. Rather than proceeding via limits as before, we used a supremum, and for good reason: in the limit, mass can be lost. We saw several examples of sequences of non-negative measurable functions that lost mass in the limit, but none that gained mass. This led us to formulate the conjecture that the integral of the limit is always bounded above by the limit of the integrals. We proved this by passing from a sequence of non-negative functions to a sequence of BS functions, for which the BCT provides a LEO. As stated, this result only applies when the limits (of the functions and of the integrals) exist. However, it turns out the same proof gives a more general result which holds even when the limits don't exist: Fatou's Lemma. To understand the statement properly, we briefly reviewed the notions of lim inf and lim sup.
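One standard example of mass escaping in the limit (my illustration, in Python; the examples from class may have been different): the functions fn, equal to n on the interval (0,1/n) and 0 elsewhere, converge to 0 at every point, yet each fn has integral 1, so the integral of the limit is strictly smaller than the limit of the integrals -- consistent with the conjectured inequality, and showing it can be strict.

    import numpy as np

    xs = np.linspace(0, 1, 1_000_001)[1:]     # uniform grid on (0, 1], avoiding x = 0

    def f(n, x):
        """f_n equals n on (0, 1/n) and 0 elsewhere."""
        return np.where(x < 1.0 / n, float(n), 0.0)

    for n in (1, 10, 100, 1000):
        integral = f(n, xs).mean()            # approximates the integral of f_n over [0,1]
        print(n, round(float(integral), 3), f(n, np.array([0.5]))[0])
    # Every integral is (about) 1, but at each fixed x the values f_n(x) are eventually 0,
    # so the limit function is identically 0 and its integral is 0 < 1.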
Last time we proved Fatou's Lemma, a failed attempt at a LEO for sequences of non-negative measurable functions. Today we started by deriving from Fatou a powerful LEO: the Monotone Convergence Theorem. From the MCT we derived a nice corollary that infinite sums and integrals can be exchanged in the context of measurable non-negative functions. This in turn gave a very elegant proof of Borel-Cantelli. Next, we turned to thinking about integrable non-negative functions, i.e. those whose integral is finite. It seems clear that any such function should tend to 0 as the magnitude of the input increases, but this turns out to be false. A version of this does hold, however: we proved that the integral over all the points outside a ball of radius r tends to 0 as r tends to infinity. A slight tweak in the argument produced a different result, that the integral grows in a continuous manner. (The formal name for the property we proved is absolute continuity.) Finally, we defined the Lebesgue integral for arbitrary measurable functions. To escape the annoying issue of ∞-∞, we restrict ourselves to integrable functions. We concluded by stating a fundamental LEO called the Dominated Convergence Theorem, which asserts that if a sequence of measurable functions converges and is dominated by some integrable function, then the integral of the limit equals the limit of the integrals. We'll prove this next class.
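For the record, the sum-versus-integral corollary mentioned above reads as follows (in the notation we've been using): if f1, f2, f3, ... are non-negative measurable functions, then

    ∫ (f1 + f2 + f3 + ...) = ∫ f1 + ∫ f2 + ∫ f3 + ...,

which follows by applying the MCT to the partial sums f1 + ... + fN, an increasing sequence of non-negative measurable functions converging pointwise to the full sum.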
We started by developing a proof of the Dominated Convergence Theorem, a cornerstone LEO of integration theory. The proof turned out to be a sort-of hybrid of the proofs of our two propositions from last class. As has become familiar, the proof produced a stronger result than simply a LEO: we proved that if a sequence of functions converges a.e. and is dominated by some integrable function, then the convergence continues to hold in L1. Motivated by the repeated appearance of such results, we formalized this by creating a space L1: it's basically the set of all integrable functions on ℝd, except that we consider two functions to be the same iff they're equal a.e. L1 has a nice algebraic structure (it's a real vector space) but it also has an analytic structure: the integral of |f-g| serves as a metric on the space. Using this to measure the size of a single function led us to define the L1-norm. Finally, we proved the Riesz-Fischer theorem: L1 is complete with respect to this metric. (In other words, you never leave the space L1 by taking limits.) A consequence of our proof, related to a recent homework problem, is that if a sequence converges to f in L1, then a subsequence must converge to f a.e.
After reviewing material (including rephrasing the Dominated Convergence Theorem as one type of convergence implying another), we resumed discussing the structure of L1. We talked about the idea of dense subspaces, and came up with a few examples. Then, motivated by thinking about vector spaces, we realized that to make L1 more geometric it would be nice to measure the `angle' between its elements... in other words, to define a dot product on L1. We came up with a natural one, but it wasn't defined on the entire space. So we created a new space of functions, L2, on which the dot product is defined for every pair of elements. In the process we saw that we had to change the metric on this space. We've lost some functions from L1 and gained a few new ones, but in exchange we've created a really nice space in which we can measure the lengths of elements and the angles between pairs of elements. Using this intuition, we formulated some conjectures, all of which turn out to be true. L2 also turns out to be complete with respect to the L2 metric (the proof is nearly the same as the proof we gave last class of the Riesz-Fischer theorem). L1 is a complete normed vector space, and is thus a prototypical example of a Banach space. L2 is a complete inner product space, and is therefore a prototypical Hilbert space.
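For a concrete instance of the 'lost some functions, gained some' phenomenon (an example I'm adding here, not necessarily one from class): on (0,1), the function f(x) = 1/√x satisfies ∫|f| = 2 < ∞ but ∫|f|² = ∫ dx/x = ∞, so f belongs to L1 but not to L2; conversely, on (1,∞), the function g(x) = 1/x satisfies ∫|g|² = 1 < ∞ but ∫|g| = ∞, so g belongs to L2 but not to L1.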