INFORMATION ON READING BEFORE CLASS
Below are
some comments to help you prepare for each class's lecture. For each section in
the book, I'll mention what you should read before class and what the key
points are. When you come to class, you should have already read the
section and have some sense of the definitions of the terms we'll study and the
results we'll prove. This does not mean you should know the material well enough
to give the lecture; it does mean that you should have a familiarity with the
material so that when I lecture on the math, it won't be your first exposure to
the terminology or results. Everyone processes and learns material in different
ways; for me, I find it very hard to go to a lecture on a subject
I'm unfamiliar with and get much out of it. I need to have some sense of what
will happen, as otherwise I spend too much time absorbing the definitions, and
then I fall behind. I'm hoping the bullet points below will help you in
preparing for each lecture. If there is anything else I can do to assist, as
always let me know (either email directly, or anonymously through
mathephs@gmail.com, password 11235813).
Also, you may
wish to look at some worked out examples before class that are similar to the
HW. These are
available online here; I will do many of these problems in class. The reason
I want to do these is precisely because I have written up the solutions. This way
you can sit back a bit more and follow the example without worrying about
writing everything down.
CHAPTER 2: DIFFERENTIATION
- Section 2.6: Gradients and Directional Derivatives:
- The definition of the gradient (page 163). Note the gradient is the derivative
of a function from Rⁿ to R; it is a vector with n components, where the i-th
component is the partial derivative of f in the direction of the i-th
coordinate axis.
- The definition of the directional derivative (page 164): This generalizes the
partial derivatives we've discussed, and allows us to look at how a function
is changing along an arbitrary line (but not an arbitrary curve). The
definition even suggests a way to compute the directional derivative: use
the first special case of the chain rule (Section 2.5, page 153).
- Theorem 12 (page 165): This is the simplest way to compute the directional
derivative; it states that the directional derivative of f in the direction
of v at the point x is just the dot product of the gradient of f and v; in
other words, the directional derivative is (∇f)(x) • v.
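If you'd like to experiment before class, here is a minimal sketch (Python with
sympy) checking Theorem 12 on one example; the function f(x,y) = x²y, the point
(1,2), and the direction v are my own illustrative choices, not from the book.

```python
# A minimal sketch (my own example) checking Theorem 12: the directional
# derivative of f at a point in the direction v equals (grad f)(x) . v.
import sympy as sp

x, y, t = sp.symbols('x y t')
f = x**2 * y                                         # illustrative function f(x, y)
point = {x: 1, y: 2}                                 # the point where we differentiate
v = sp.Matrix([sp.Rational(3, 5), sp.Rational(4, 5)])  # a unit direction vector

grad_f = sp.Matrix([sp.diff(f, x), sp.diff(f, y)])   # the gradient of f

# Directional derivative via the dot product formula of Theorem 12
via_dot = (grad_f.T * v)[0].subs(point)

# Directional derivative straight from the definition: d/dt f(x0 + t*v) at t = 0
f_on_line = f.subs({x: 1 + t * v[0], y: 2 + t * v[1]})
via_definition = sp.diff(f_on_line, t).subs(t, 0)

print(via_dot, via_definition)    # both print 16/5
```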
- Geometric interpretation of the gradient:
Theorem 13 (page 166) states that the gradient points in the direction where
f is increasing the fastest, while Theorem 14 (page 167) tells us that the
gradient is perpendicular to level surfaces (we'll discuss this in much
greater detail in class). These two items will be of great aid in
optimization problems in Chapter 3.
CHAPTER 3: HIGHER-ORDER DERIVATIVES: MAXIMA AND
MINIMA
- Section 3.1: Iterated
Partial Derivatives:
- The definition of mixed partial derivatives
(page 182): Given a function f, we can compute its partial derivatives, such
as ∂f/∂x and ∂f/∂y. We can then take the partial derivatives of the partial
derivatives: ∂(∂f/∂x)/∂y and ∂(∂f/∂y)/∂x. In the first, we first take the
derivative with respect to x, and then take the derivative with respect to
y; in the second, we take the derivatives in the other order. Does the order
matter? We write ∂²f/∂y∂x for ∂(∂f/∂x)/∂y; thus the derivative
symbol on the far right of the denominator is the derivative we take first,
and the symbol on the far left is what we take last.
- The definition of C², the class
of twice continuously differentiable functions (page 182): If the function
is C², the mixed partial derivatives of second order (i.e.,
involving at most two derivatives) exist and are continuous. You should
compare this to the definition of C¹ on page 138. Just as C¹
functions had nice properties (being C¹ means the partial
derivatives exist and are continuous, which implies the function is
differentiable), being C² has nice properties.
- Equality of Mixed Partial Derivatives
(Theorem 1, page 183): For a C² function, the order of
differentiation does not matter; in other words, ∂²f/∂y∂x = ∂²f/∂x∂y.
- Notation: f_x means ∂f/∂x, and f_xy
means (f_x)_y, which is ∂²f/∂y∂x. Note that
the order of subscripts is the opposite of the order of differentiation;
fortunately, if f is C² then the order does not matter!
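Here is a small sketch you can run to watch Theorem 1 in action; the function
below is my own illustrative choice, not one taken from the book.

```python
# A minimal sketch (my own example) of Theorem 1: for this C^2 function,
# the two mixed partial derivatives agree.
import sympy as sp

x, y = sp.symbols('x y')
f = sp.exp(x * y) + x**3 * sp.sin(y)

f_xy = sp.diff(f, x, y)    # differentiate in x first, then in y
f_yx = sp.diff(f, y, x)    # differentiate in y first, then in x

print(sp.simplify(f_xy - f_yx))   # prints 0: the order does not matter here
```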
- Examples of partial differential equations:
The rest of the section is devoted to examples of partial differential
equations. Solving these in general is beyond the scope of this course; in
fact, most are beyond the scope of humanity! One example is the
Millennium Prize for the
Navier-Stokes equation (i.e., solve this and earn $1,000,000). You are
not responsible for any of this material; it is provided in nice detail in
this book for your interest, and to show you what you will see if you
continue with mathematics.
- Section 3.2: Taylor's
Theorem:
- In addition to reading our book,
see also my lecture notes on Taylor's Theorem in one-variable. I derive
Taylor's theorem from (surprise!) the Mean Value Theorem.
- Taylor's Theorem in one-variable: You
should read the statement of Taylor's theorem in one variable (equation 1 on
page 194). I will present a proof of a special case using the mean value
theorem, and thus you may ignore (if you wish) the proof on pages 194 to 195.
- Taylor's Theorem in several variables
(Theorems 2 and 3, page 196): You should be aware of the statement. The top
of page 197 gives a nice interpretation of the result in terms of matrices.
We'll talk a little about this, but as linear algebra is not a
pre-requisite, we will not delve into great detail. Knowing the statements
of Theorems 2 and 3 is enough; you should return to this section after
linear algebra.
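For a preview of what the several-variable statement produces, here is a sketch
that assembles the degree-2 Taylor polynomial of a two-variable function from
its partial derivatives; the function exp(x)·cos(y), the expansion point (0,0),
and the symbols h, k (the displacements from the expansion point) are my own
choices for illustration.

```python
# A minimal sketch (my own example): the degree-2 Taylor polynomial of
# f(x, y) = exp(x)*cos(y) about (0, 0), built from the partial derivatives.
# Here h and k denote the displacements x - 0 and y - 0.
import sympy as sp

x, y, h, k = sp.symbols('x y h k')
f = sp.exp(x) * sp.cos(y)
a, b = 0, 0                            # expansion point

def at(expr):
    """Evaluate an expression at the expansion point."""
    return expr.subs({x: a, y: b})

T2 = (at(f)
      + at(sp.diff(f, x)) * h + at(sp.diff(f, y)) * k
      + sp.Rational(1, 2) * (at(sp.diff(f, x, 2)) * h**2
                             + 2 * at(sp.diff(f, x, y)) * h * k
                             + at(sp.diff(f, y, 2)) * k**2))

print(sp.expand(T2))    # 1 + h + h**2/2 - k**2/2 (up to term order)
```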
- Ignore all forms of the remainder when
reading this section, unless you want to know more about this. Thus you do
not need to read page 198 and the top of 199. Instead of using the integral
version of the MVT, I prefer giving proofs using the standard formulation.
- Examples: You should read the examples from
pages 199 to 202. I'll discuss some of these and related problems in class;
I'll also show you a nice `trick' that allows you to compute certain Taylor
series expansions a lot faster than the standard method; though it only
works some of the time, when it does it is quite fast.
- Section 3.3: Extrema of
Real-Valued Functions:
- Fun reading: the historical part at the
beginning of the section is for your enjoyment only. This is continued in
the historical note starting on page 267.
- Definition of local maximum / minimum (page
207): You should be comfortable with the definition of a local max/min.
Essentially, a point x₀ is a local maximum if there is
some ball centered at x₀ such that f(x₀)
is at least as large as f(x) for all other x in that ball. For
example, f(x,y) = y² sin²(xy) has a local minimum at (x,y)
= (0,0). Clearly f(x,y) is never negative, and it is zero at (0,0). Thus
(0,0) is a local minimum. Note that f(x,0) is also zero for any
choice of x. Thus to be a local minimum we don't need to be strictly less
than all other nearby points.
- First derivative test for local extrema
(Theorem 4, page 208): The generalization of the results from one-variable
calculus; candidates for max/min are where the first derivative (the
gradient) vanishes.
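As a warm-up for Theorem 4, here is a sketch that finds the candidates for
extrema by solving ∇f = 0; the function is an illustrative choice of mine, not
an example from the book.

```python
# A minimal sketch (my own example) of the first derivative test: candidates
# for local extrema are the points where the gradient vanishes.
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 + y**2 - 2*x - 6*y + 14

critical_points = sp.solve([sp.diff(f, x), sp.diff(f, y)], [x, y], dict=True)
print(critical_points)               # [{x: 1, y: 3}]
print(f.subs(critical_points[0]))    # 4, which here is the minimum value
```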
- Examples (pages 209 - 210): you should look
at these examples on how to find max/min.
- Do not read pages 211 - 220. We will not
cover these in class; however, once you know linear algebra these results
make more sense (and can be rewritten in a more illuminating manner). We
will discuss this in the advanced class.
- Summary (page 221): Look at the summary on
the top of page 221 and the subsequent example. This provides a nice recap
of the method to find maxima/minima.
- Section 3.4: Constrained
Extrema and Lagrange Multipliers:
- The method of Lagrange Multipliers (Theorem
8, page 226): This is the key result. It says that if we want to find the
extrema of a function f whose input x is constrained to lie on a level set of a
function g (i.e., find the max/min of f(x) given that g(x) = c for a fixed c),
then these occur at points where the gradient of f is parallel to the gradient
of g. We may rewrite this condition and say that x₀ is a candidate extremum for
f under our constraint if there is some number λ such that
(∇f)(x₀) = λ(∇g)(x₀).
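If you want to see the method in miniature before class, here is a sketch for
one made-up problem (extremize f(x,y) = xy on the circle x² + y² = 2); it
simply solves ∇f = λ∇g together with the constraint.

```python
# A minimal sketch (my own example) of Lagrange multipliers: extremize
# f(x, y) = x*y subject to the constraint x**2 + y**2 = 2.
import sympy as sp

x, y, lam = sp.symbols('x y lambda', real=True)
f = x * y
g = x**2 + y**2 - 2                  # constraint written as g(x, y) = 0

# grad f = lambda * grad g, solved together with the constraint itself
equations = [sp.diff(f, x) - lam * sp.diff(g, x),
             sp.diff(f, y) - lam * sp.diff(g, y),
             g]
solutions = sp.solve(equations, [x, y, lam], dict=True)

for s in solutions:
    print(s[x], s[y], f.subs(s))     # candidates (+-1, +-1), with f = +-1
```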
- Examples (pages 228 - 231): Read these examples on how to use Lagrange
multipliers.
- Caveats: Existence of solutions (page 231): You should read the story
about Dorothy Sayers' detective. It illustrates in a nice way a common
danger: even if we prove that the only candidates for a max/min are the
critical points, the function might not have a max/min at all! Theorem 7 (page 220 of
Section 3.3) states when we must have a max/min.
- I will try to do several examples of applications of Lagrange
multipliers.
- Section 3.5: The Implicit
Function Theorem:
- We will not cover this section.
- Special Topic: Method of
Least Squares
- We will give one of the most important
applications of partial derivatives and optimization, the Method of Least
Squares. This is a technique that allows us to find best-fit parameters.
Finding such values is central in numerous disciplines. Specifically, we
have some data and we want to see if it fits our theory. If you have a data
set you'd like analyzed, please let me know.
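To give you a taste, here is a minimal sketch of fitting a line y = mx + b to
made-up data by least squares; the data points are invented purely for
illustration.

```python
# A minimal sketch of least squares: fit y = m*x + b to made-up data by
# minimizing the sum of squared errors. Setting the partial derivatives of
# that sum (with respect to m and b) to zero gives the normal equations,
# which np.linalg.lstsq solves for us.
import numpy as np

xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
ys = np.array([1.1, 2.9, 5.2, 7.1, 8.8])       # invented data

A = np.vstack([xs, np.ones_like(xs)]).T         # columns: x and 1
(m, b), *_ = np.linalg.lstsq(A, ys, rcond=None)
print(m, b)    # roughly 1.96 and 1.10: best-fit slope and intercept
```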
CHAPTER 4: VECTOR-VALUED FUNCTIONS: we will not
cover this chapter in the main lectures
CHAPTER 5: DOUBLE AND TRIPLE INTEGRALS
- General Comments:
- As many people have not seen a proof of the
Fundamental Theorem of Calculus, I will prove this important result in full
detail in class, and merely state what happens in several variables. We will
loosely follow the book for this chapter. The reason is that, as we only
have 12 weeks, we do not have time to delve fully into the theory of double
and triple integrals. Instead, for this chapter we will concentrate on the
applications, namely becoming proficient at computing these integrals.
- Section 5.1: Introduction
- For us, all that matters is the definition
of the double integral (the box on page 318). I encourage you to skim the
rest of the section, but the definition is all you need.
- Section 5.2: The Double
Integral over a Rectangle
- The definition of the double integral is
very important (pages 327-328); we'll discuss in great depth the
corresponding framework in one-dimension.
- Know Theorem 1, page 329: any continuous function on a closed rectangle,
such as [a,b] × [c,d] with a, b, c, d finite, is integrable. We will discuss
the proof of a related, simpler statement. We will not prove this result in
full generality, though the proof is in the book if you wish to read it /
discuss it with me.
- Be aware of the four properties of integrals on page 331 (linearity,
homogeneity, monotonicity and additivity). The proofs are similar to the
proofs in the 1-dimensional case.
- Know the statement of Fubini's Theorem (page 334) about when you can
interchange orders of integration. We will not do the proof in class, though
it is in the book.
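Here is a small sketch illustrating Fubini's Theorem on a rectangle: the same
double integral computed in both orders; the integrand and the rectangle are my
own choices.

```python
# A minimal sketch (my own example) of Fubini's Theorem: the same double
# integral over the rectangle [0, 2] x [0, 3], computed in both orders.
import sympy as sp

x, y = sp.symbols('x y')
f = x * y**2 + 1

dx_first = sp.integrate(sp.integrate(f, (x, 0, 2)), (y, 0, 3))
dy_first = sp.integrate(sp.integrate(f, (y, 0, 3)), (x, 0, 2))
print(dx_first, dy_first)    # both print 24
```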
- Section 5.3: The Double
Integral over more general Regions
- Know the definition of the following terms:
boundary, x-simple, y-simple, simple, and elementary region (page 341).
- Know the definition of the integral over an
elementary region (pages 342-343).
- Know Theorem 4 (Reduction to Iterated Integrals, page 344): this allows
us to compute integrals over elementary regions. See also the example on
page 345.
- Section 5.4: Changing the
Order of Integration
- The key result of this section is the
formula (sadly not boxed) at the top of page 349, which tells us how to
switch the order of integration.
- Read example 1, page 349: great illustration of how changing orders can
help.
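Here is a sketch of the classic phenomenon: an iterated integral that is stuck
in one order (the inner antiderivative of exp(y²) is not elementary) becomes
immediate after switching; this particular example is my own illustration and
not necessarily the one on page 349.

```python
# A minimal sketch (my own example, not necessarily the book's Example 1):
# integrate exp(y**2) over the triangle 0 <= x <= 1, x <= y <= 1.
# In the order dy dx the inner antiderivative is not elementary, but after
# switching to dx dy the region becomes 0 <= y <= 1, 0 <= x <= y and the
# computation is immediate.
import sympy as sp

x, y = sp.symbols('x y')
switched = sp.integrate(sp.integrate(sp.exp(y**2), (x, 0, y)), (y, 0, 1))
print(switched)    # (E - 1)/2, possibly printed as E/2 - 1/2
```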
- Section 5.5: The Triple
Integral
- We will not cover this section in the
lectures.
- Special Topic: Monte Carlo
Integration
- Monte Carlo Integration has been called one
of the most useful results of 20th century mathematics. We'll discuss how it
is done. It is an alternative to standard integration. Normally we look for
anti-derivatives; however, in the real world most functions we encounter do
not have nice anti-derivatives; Monte Carlo Integration provides a way to
approximate these integrals.
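To make this concrete, here is a minimal sketch of Monte Carlo integration over
the unit square: average the integrand at random points and multiply by the
area of the region; the integrand is an illustrative choice of mine.

```python
# A minimal sketch of Monte Carlo integration (my own example): estimate the
# integral of exp(-(x**2 + y**2)) over the unit square by averaging the
# integrand at random points (the square has area 1, so the average itself
# is the estimate).
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
xs = rng.random(n)                    # uniform samples in [0, 1]
ys = rng.random(n)

values = np.exp(-(xs**2 + ys**2))
print(values.mean())                  # about 0.558, close to the true value
```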
- The reference for this topic is not the book,
but rather my lecture notes (the
last three pages of my chapter 3 notes, namely pages 36-38).
CHAPTER 6: CHANGE OF VARIABLE FORMULA AND
APPLICATIONS OF INTEGRATION
- Section 6.2: The Change of
Variable Theorem
- Know the definition of Jacobian
determinants (page 377); see Example 1 on that page.
- Read the statement of the Change of
Variables formula for double integrals on page 382. We will not deal with
this theorem in its full generality, but I want you to at least be aware of
its statement. We will concentrate on several special cases: polar
coordinates (page 383), cylindrical coordinates (page 388), and spherical
coordinates (page 389). Feel free to skip the rest of this section (as well
as the rest of this chapter).
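Here is a sketch computing the Jacobian determinant of the polar-coordinate
map, which is where the extra factor of r in polar double integrals comes from;
the disk integral at the end is my own illustrative example.

```python
# A minimal sketch: the Jacobian determinant of the polar-coordinate map
# (x, y) = (r*cos(theta), r*sin(theta)); its value r is the extra factor
# in double integrals done in polar coordinates.
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
x = r * sp.cos(theta)
y = r * sp.sin(theta)

J = sp.Matrix([[sp.diff(x, r), sp.diff(x, theta)],
               [sp.diff(y, r), sp.diff(y, theta)]])
print(sp.simplify(J.det()))    # r

# Illustrative use: integrate exp(-(x**2 + y**2)) over the unit disk
disk = sp.integrate(sp.integrate(sp.exp(-r**2) * r, (r, 0, 1)),
                    (theta, 0, 2 * sp.pi))
print(sp.simplify(disk))       # pi - pi*exp(-1), i.e. pi*(1 - 1/e)
```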
CHAPTER 10 (Cain and Herod): SEQUENCES, SERIES
AND ALL THAT:
notes available here.
- 10.1: Introduction
- Just know that one motivation comes from
Taylor series.
- 10.2: Sequences
- Know the definition of a sequence and some
common examples.
- Know what it means for a sequence to
converge.
- 10.3: Series
- Know the definition of a series.
- Know what it means for a series to converge.
- Know the definition of the harmonic series.
- Know the integral test.
- 10.4: More Series
- Know the definition of a positive series.
- Know the comparison test for convergence
(it's the method of this section; they don't call it the comparison test till
the next section).
- 10.5: Even More Series
- Know the ratio test for convergence.
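If you want to see the ratio test in an actual computation, here is a sketch
for the series with terms a_n = n/2ⁿ; the example is my own choice.

```python
# A minimal sketch (my own example) of the ratio test for the series with
# terms a_n = n / 2**n: the limit of a_{n+1}/a_n is 1/2 < 1, so it converges.
import sympy as sp

n = sp.symbols('n', positive=True, integer=True)
a = n / 2**n

print(sp.limit(a.subs(n, n + 1) / a, n, sp.oo))   # 1/2
print(sp.summation(a, (n, 1, sp.oo)))             # 2: sympy can even sum it
```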
- 10.6: A Final Remark
- Know the alternating series test for convergence.