MATH 466: Additional comments related to material from the
class. If anyone wants to convert this to a blog, let me know. These additional
remarks are for your enjoyment, and will not be on homeworks or exams. These are
just meant to suggest additional topics worth considering, and I am happy to
discuss any of these further.
- Monday, Dec 4:
Isoperimetric Inequality and Green's Theorem:
https://youtu.be/6-aO8fT50Ms (Green's theorem in a day:
https://www.youtube.com/watch?v=Iq-Og1GAtOQ)
- Friday, Dec 1: Uncertainty Principle
(from 2013): http://youtu.be/y_AXASHUHE8
- Today was a big day for physics, finally showing how the Heisenberg
Uncertainty Principle is a consequence of properties of the Fourier
transform. This was a very fast-paced lecture; in an ideal world we'd have
another 15 minutes. It was a tough choice, but I decided it's better to do it
in one day while everything is fresh than to split it into several. This means
that we can't do all the steps in class together. I strongly
urge you to go through the final few steps yourself, and to revisit the last
few pages of the handout, if you want to see the nuts and the bolts. There are
a lot of calculations to do. This is exactly how lectures are in
graduate school. All the details are not done; the algebra or
standard calculations are often left for you. Thus you need to show \(\langle
Qf, f\rangle = \langle f, Qf\rangle\), and similarly for \(P\) (this is
equivalent to saying these two operators are
Hermitian);
the proof involves computing each separately and then integrating one of them
by parts. If you haven't done these calculations they're great exercises, and
they also explain why we defined \(P\) to be \(-i d/dx\) and not \(d/dx\) (the
presence of the \(-i\) makes it Hermitian). The last bit today was doing
algebra to look at \(1 = ||f||_2^2 = i\int_{-\infty}^\infty ((PQ - QP)f)(x) \overline{f(x)}
dx\), as \(PQ - QP = -iI\). Though we were a bit rushed, if you go through the
algebra and use the fact that the operators are Hermitian (so \(\langle Af, f\rangle
= \langle f, Af\rangle\) for \(A\) a Hermitian operator), and use Plancherel
to note \(||(Pf)||_2 = ||\widehat{(Pf)}||_2\) (up to factors of 2 and
\(\pi\)), we get the product of the variances is at least a universal
constant.
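- If you want to see the Hermitian claims concretely, here is a small numerical sanity check (my own sketch, not from the lecture; the grid size and test function are arbitrary choices): discretize \(Qf = x f(x)\) and \(Pf = -i f'(x)\) with central differences on a rapidly decaying function and verify \(\langle Af, f\rangle = \langle f, Af\rangle\) for both operators.

```python
# A numerical sanity check (my sketch, not from class): on a grid, with a
# rapidly decaying complex test function, Qf = x f(x) and Pf = -i f'(x)
# (central differences) satisfy <Af, f> = <f, Af>, i.e. both are Hermitian.
import math

N, L = 2000, 10.0                      # grid points and half-width
dx = 2 * L / N
xs = [-L + j * dx for j in range(N + 1)]
f = [math.exp(-x * x) * complex(1.0, math.sin(x)) for x in xs]

def inner(u, v):
    """Discrete <u, v> = sum of u(x) * conj(v(x)) * dx."""
    return sum(a * b.conjugate() for a, b in zip(u, v)) * dx

Qf = [x * y for x, y in zip(xs, f)]    # position operator: multiply by x

Pf = [0j] * (N + 1)                    # momentum operator: -i d/dx
for j in range(1, N):
    Pf[j] = -1j * (f[j + 1] - f[j - 1]) / (2 * dx)

q_gap = abs(inner(Qf, f) - inner(f, Qf))   # should be essentially 0
p_gap = abs(inner(Pf, f) - inner(f, Pf))   # should be essentially 0 (needs the -i)
```

The discrete summation-by-parts identity plays the role of integration by parts here; dropping the \(-i\) in \(P\) makes the second gap order one instead of zero.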
- Wednesday, Nov 29: Weierstrass
nowhere differentiable function, isoperimetric inequality:
- Weierstrass nowhere differentiable function:
https://en.wikipedia.org/wiki/Weierstrass_function
- Baire spaces / Baire category theorem:
https://en.wikipedia.org/wiki/Baire_space
- The key idea today was to write the Fourier coefficient as a nice
integral where we used the closed form expression for the Fejer kernel.
- One mistake today was deliberate, to see if you were paying attention
(not having negative k); one was accidental (not having the square initially
in the Fejer kernel). The main point from today is to get a sense of how to
handle functions, and how to see what is reasonable in an equation. Spot
checks can find MANY mistakes. The goal was to break the analysis of our
integrand into different regions depending on its size. We use different
approximations in each. For |x| far from 0, we can replace the sine in the
numerator with 1; if instead |x| is within 1/N of zero we need to be more
careful, and that's where we use the bounds \(C_2 x \le \sin(\pi x) \le C_1
x\) for \(0 \le x \le 1/2\).
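- The sine bounds are easy to check numerically. Here is a minimal sketch (my own choice of constants, not necessarily the ones from class): \(C_2 = 2\) works by concavity of sine on \([0, \pi]\) (the graph sits above its chord \(2x\)), and \(C_1 = \pi\) works since \(\sin t \le t\) for \(t \ge 0\).

```python
# Check C_2 x <= sin(pi x) <= C_1 x on [0, 1/2] with the choices
# C_2 = 2 (concavity: sine lies above its chord from (0,0) to (1/2,1))
# and C_1 = pi (since sin(t) <= t for t >= 0).
import math

ok = all(2 * x <= math.sin(math.pi * x) <= math.pi * x
         for x in (k / 10_000 for k in range(1, 5_001)))
```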
- Monday Nov 20:
GOE nxn case, Vandermonde Matrices:
https://youtu.be/ub4hcv30zpU
- Monday Nov 20:
GOE Joint Density I, Integration Challenge:
https://youtu.be/kUtFBg9IeGI
- Chi-square distribution:
https://en.wikipedia.org/wiki/Chi-squared_distribution
- Key idea today was the theory of integration constants; if we can sniff
out the x-dependence, then we can often determine the integral later as the
final item integrates to 1.
- The following comment
is very important (hence the color!). We could prove elegantly that
the sum of two normals is normal. Note the exponent of the exponential, after the dust
settles and we complete the square, has to look like \(az^2 + b(t - cz)^2\)
for some constants \(a, b, c\). The actual values do not matter; when we
integrate we can pull out the \(\exp(-az^2)\) and then shift \(t\) by \(cz\),
and then the \(t\)-integration and the other constants combine to give us
something that looks like \(C \exp(-az^2) = C \exp(-z^2/(2 \cdot \frac{1}{2a}))\) for
some constants \(C, a\). However, we know that the mean of a sum of
independent random variables is the sum of the means (and similarly for the
variances). Thus our distribution has the shape of a normal with mean \(0\) and
variance \(1/(2a)\). As this has to have mean \(0\) and variance \(\sigma_1^2 +
\sigma_2^2\) we must have \(1/(2a) = \sigma_1^2 + \sigma_2^2\), and thus
\(C = 1/\sqrt{2\pi(\sigma_1^2 + \sigma_2^2)}\). Thus, integration without
integrating is even more amazing than you might initially believe -- we don't
even need to keep track of the algebra!!!
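- As a sanity check on the shape argument, here is a quick Monte Carlo (my addition, not from class; the seed, sample size, and \(\sigma\) values are arbitrary): the sum of independent \(N(0, \sigma_1^2)\) and \(N(0, \sigma_2^2)\) samples should have mean near \(0\) and variance near \(\sigma_1^2 + \sigma_2^2\).

```python
# Monte Carlo check: sum of independent N(0, s1^2) and N(0, s2^2) draws
# has mean ~0 and variance ~ s1^2 + s2^2 (here 1 + 4 = 5).
import random

random.seed(466)
s1, s2, n = 1.0, 2.0, 200_000
z = [random.gauss(0, s1) + random.gauss(0, s2) for _ in range(n)]
mean = sum(z) / n
var = sum((t - mean) ** 2 for t in z) / n
```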
- Here is the integration challenge: make this rigorous!
- Define \(\int f\) to be \(\int_0^x f(t)dt\).
- We want to solve \(\int f = f - 1\).
- Thus \(f - \int f = 1\).
- \((1 - \int)f = 1\).
- \(f = (1 - \int)^{-1} 1\).
- \(f = (1 + \int + \int\int + \int\int\int + \cdots)1\).
- \(f = 1 + \int 1 + \int \int 1 + \int \int \int 1
+ \cdots\).
- As \(\int 1 = \int_0^x 1 dt = x\), and \(\int \int 1 = \int x =
\int_0^x t dt = x^2/2\), .... Thus if we have \(\int\) a total of \(n\)
times we get \(x^n/n!\).
- Thus \(f(x) = 1 + x + x^2/2! + x^3/3! + \cdots\). This looks interesting
-- can it be made rigorous? What is needed to justify the steps?
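- One way to get a feel for the answer before worrying about rigor (my sketch, not part of the challenge itself): iterate the map \(f \mapsto 1 + \int f\) on polynomials, tracking coefficient lists; the iterates are exactly the Taylor partial sums of \(e^x\).

```python
# Iterate f <- 1 + (integral of f from 0 to x), starting from f = 0,
# representing polynomials by coefficient lists (coeffs[k] is the x^k term).
import math

def integrate(coeffs):
    """Antiderivative of sum a_k x^k, vanishing at 0."""
    return [0.0] + [a / (k + 1) for k, a in enumerate(coeffs)]

f = [0.0]                          # start from the zero function
for _ in range(20):
    g = integrate(f)
    f = [1.0 + g[0]] + g[1:]       # f <- 1 + int f

def evaluate(coeffs, x):
    return sum(a * x ** k for k, a in enumerate(coeffs))
# evaluate(f, 1.0) is now the degree-19 Taylor partial sum of e^x at x = 1
```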
- Friday Nov 17:
Gaussian Orthogonal Ensemble (Part I):
https://youtu.be/sUdbCwrudL8
- Wednesday Nov 15:
Wigner Combinatorics IV, Distribution of Eigenvalues I:
https://youtu.be/pJShOxNRGZc
- Finished our recurrence integration, key idea was constantly multiplying
by 1 to convert double factorials into factorials, and then notice binomial
coefficients.
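- The "multiply by 1" trick amounts to the standard identity \((2m-1)!! = (2m)!/(2^m\, m!)\) (fill in the missing even factors and pull them back out). A quick check (my code, not from the lecture):

```python
# Check (2m - 1)!! = (2m)! / (2^m * m!) for the first several m.
import math

def double_factorial_odd(m):
    """(2m - 1)!! = 1 * 3 * 5 * ... * (2m - 1)."""
    out = 1
    for k in range(1, 2 * m, 2):
        out *= k
    return out

ok = all(double_factorial_odd(m) == math.factorial(2 * m) // (2 ** m * math.factorial(m))
         for m in range(1, 15))
```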
- We were able to find explicit, but painful, formulas for eigenvalues of
NxN matrices for N from 2 to 4 (the formula is pleasant for N=1, for N > 4
we no longer have closed form expressions). Thus we need a new idea to make
progress!
- Monday Nov 13:
Wigner Combinatorics III: Contributions of
Configurations and Moments of Semi-Circle:
https://youtu.be/Rzkzho023Hk
- Friday Nov 10:
Wigner Combinatorics and Catalan Numbers:
https://youtu.be/JmXX_N7oYLw
- Wednesday Nov 8:
Wigner Semi-Circle Law: Part I:
https://youtu.be/PvKZrvnAMAU
- Key idea today was degrees of freedom estimates. Once we figure out a
decent bound on the number of tuples with a given configuration, we can show
many of these cannot contribute in the limit. It's important to distinguish
between the different parameter dependencies. As we are fixing k and letting
N go to infinity, we were fine with horrible bounds in terms of k, so long
as we could control the N dependence.
- Mon Nov 6:
Video did not record, can watch: Part I
(Classical RMT, Intro L-fns, Dirichlet): http://youtu.be/2PuUbk6gUMM (slides: part
1)
- Survey articles on number theory and random matrix theory:
- Nuclei, primes and the random matrix connection (with Frank W. K. Firk),
invited paper to Symmetry (1, (2009),
64--105; doi:10.3390/sym1010064) pdf
- From Quantum Systems to L-Functions: Pair Correlation Statistics and
Beyond (with Owen Barrett, Frank W. K. Firk and Caroline Turnage-Butterbaugh),
Open Problems in Mathematics (editors John Nash Jr. and Michael Th. Rassias,
Springer-Verlag, 2016). pdf
- Friday Nov 3:
Review of Eigenvalues and Eigenvectors II:
https://youtu.be/n3ipVnveNKw
- Wednesday Nov 1:
Review of Eigenvalues:
https://youtu.be/v6rUPqZI1KA
-
Monday Oct 30:
Gaps between independent, uniform random variables:
https://youtu.be/mRygXy9nJcM
- Wednesday Oct 18:
Recurrence Relations, Spacing Preliminaries:
https://youtu.be/_B-hOn-VEww
- Monday Oct 16:
Hershey
Game, Fibonacci numbers are Benford:
https://youtu.be/fP-zhIkXMmE
- Some facts on monovariants (from our Operations Research class).
Monovariants are a wonderful topic, one often not seen (sadly) in the
curriculum.
- In the proof that the Fibonacci
numbers are Benford, the final difficulty was making sure the small error
affects almost nothing in the limit. For some good amount of extra
credit / HW exemption, prove that only finitely often does this small
term change the leading digit. NOTE: I don't know if it is only finitely
often!
- Wednesday Oct 11:
Weyl's
Proof of Kronecker's Theorem:
https://youtu.be/wZwawWjT0xI
- Friday Oct 6: Dirichlet's Theorem,
Liouville's Theorem: https://youtu.be/H2iA5VyHO3k
- Wednesday Oct 4:
Spreading Normal mod 1, Fibonaccis are Benford, Kronecker's Theorem:
https://youtu.be/WVWiWsfUlW8
- Monday Oct 2:
Gaussian Integral, Fourier Transform of the Normal Distribution:
https://youtu.be/FzqMInRzOew
- Friday, Sept 29:
Introduction to Benford's Law, Spreading Normal mod 1 is Uniform:
https://youtu.be/UQijNiZUlic
- Today was a technical lecture on how to constantly break up expressions,
how to do the analysis and bounding. Key tools are the MVT or Taylor
Expansion to replace a difference of a slowly varying function, breaking
sums into small and large n and handling each differently. Also saw how to
do much better than Chebyshev by using properties of the function of
interest.
- Wednesday Sept 27:
Fourier Transform, Poisson Summation: Part 1:
https://youtu.be/v7yP9UXuxY8 (audio lost last 12 minutes, see
https://youtu.be/2gfa5pvQ8kc for rest)
- Monday Sept 25:
Evaluating Sums, Extending Dirichlet's Theorem:
https://youtu.be/OOeBLBNrJJg
- Main point of today's lecture was seeing that a proof may be capable of
generalization with more work. It also shows us why certain concepts are
introduced. We see we don't need the full strength of differentiability,
just that a certain quotient is bounded.
- Basel problem: \(\sum 1/n^2\):
https://en.wikipedia.org/wiki/Basel_problem
- Riemann zeta function:
https://en.wikipedia.org/wiki/Riemann_zeta_function (and zeta(2) is the
Basel problem above).
- Irrationality measure and lower bounds for \(\pi(x)\) (with David Burt, Sam
Donow, Matthew Schiffman and Ben Wieland), to appear in The
Pi Mu Epsilon Journal. pdf
- Friday Sept 22:
Approximations to the Identity, Dirichlet Kernel, Fejer Kernel:
https://youtu.be/GI4VVJLUpXo
- Saw a lot about how to add zero, how to break a quantity into smaller
pieces that are easier to handle, can use different techniques to bound the
summand in different ranges.
- Saw the difficulties in absolute values. It's nice to be able to just
take the absolute value of the integrand and then use the Triangle
Inequality, but that is usually only a good idea if the integral is small
because the integrand is small (i.e., it's not small due to oscillation).
- Bessel's equality: generalization:
http://mathworld.wolfram.com/BesselsInequality.html and
https://en.wikipedia.org/wiki/Bessel%27s_inequality. Often called
Parseval's theorem when have equality; see
https://en.wikipedia.org/wiki/Parseval%27s_theorem.
- We used Bessel/Parseval to evaluate a sum. Interesting to now have a
sense of why \(\pi\) and its powers emerge. It's from summing the
frequencies.... The difficulty today is that it relates sums of squares.
Thus we need the Fourier Coefficients to be like \(1/n\) to have the sum of
squares involve \(1/n^2\). Unfortunately the sum of \(1/n\) has bad
convergence properties (it diverges!), so some care will be needed if we are
to prove \(\sum 1/n^2 = \pi^2/6\).
- Another nice point today was passing from knowing the sum over the odd
integers to knowing the sum over all values (and, of course, once we know
the sum over all values can we get the sum over just the evens?).
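- Both remarks are easy to test numerically. A quick sketch (my code): the partial sums of \(\sum 1/n^2\) approach \(\pi^2/6\), and since the even terms contribute exactly \(1/4\) of the full sum, the odd terms give \(3/4\) of it, i.e. \(\pi^2/8\).

```python
# Partial sums: the full Basel sum vs the sum over odd n only.
import math

N = 100_000
full = sum(1 / n ** 2 for n in range(1, N + 1))        # -> pi^2 / 6
odd = sum(1 / n ** 2 for n in range(1, N + 1, 2))      # -> (3/4) pi^2 / 6 = pi^2 / 8
```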
- Wednesday Sept 20:
Fejer's Theorem, Dirichlet's Theorem:
https://youtu.be/QhUNSQuiV08
- Monday Sept 18:
Approximations to the Identity, Dirichlet Kernel, Fejer Kernel:
https://youtu.be/GI4VVJLUpXo
- Friday Sept 15: Video: Lecture
04:
Linear Regression, Win Streaks:
https://youtu.be/W2g7levI35o
- We talked a lot about issues with linear regression; the interesting bit
is that the first column of our data matrix is all 1's, and that resolved
the issue!
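- A minimal illustration of why the column of 1's matters (my sketch; the data are made up): including it fits an intercept, so the least-squares line \(y \approx a + bx\) need not pass through the origin. For two unknowns the normal equations are just a \(2 \times 2\) system.

```python
# Ordinary least squares for y ~ a + b x via the 2x2 normal equations;
# the "column of 1's" in the data matrix is what produces the intercept a.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]          # exactly y = 1 + 2x
n = len(xs)
Sx, Sy = sum(xs), sum(ys)
Sxx = sum(x * x for x in xs)
Sxy = sum(x * y for x, y in zip(xs, ys))
det = n * Sxx - Sx * Sx
a = (Sy * Sxx - Sx * Sxy) / det    # intercept
b = (n * Sxy - Sx * Sy) / det      # slope
```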
- Lot on generating functions and recurrence relation. These links are
from my probability class.
-
Generating functions and Convolutions (Discrete Random Variables): https://youtu.be/DQNhhNNhwy8
(2015 lecture: https://youtu.be/4wbai2-EdFU)
-
Trump splits, derangements, generating functions: https://youtu.be/MrvXpJJikEA
- Wednesday, September 13.
Exponential Function, Generalizations with Matrices, Inner Products:
https://youtu.be/ZLMB5Jdccfk
- Exponential function:
https://en.wikipedia.org/wiki/Exponential_function
- Orthonormal basis:
https://en.wikipedia.org/wiki/Orthonormal_basis
- Lp-Spaces: https://en.wikipedia.org/wiki/Lp_space (see especially
https://en.wikipedia.org/wiki/Lp_space#Embeddings)
- Fourier series:
https://en.wikipedia.org/wiki/Fourier_series
- Bessel's inequality:
https://en.wikipedia.org/wiki/Bessel%27s_inequality (notice true in much
greater generality, not just about Fourier Series)
- Riemann-Lebesgue Lemma:
https://en.wikipedia.org/wiki/Riemann%E2%80%93Lebesgue_lemma (proof
https://en.wikipedia.org/wiki/Riemann%E2%80%93Lebesgue_lemma#Proof here
illustrates techniques of real analysis)
- One of the main results today is interchanging a derivative and a finite sum;
the challenge is when the sum is infinite.
Interchanging operations is one of the most important techniques, and
sometimes one of the most delicate. The first instance you encounter is
probably Fubini's
Theorem. Note it is not always permissible to exchange integrals or sums;
the difficulty is when infinities are involved. The standard condition is if
the integral (or sum) of the absolute value is finite then you are okay. For a
problem case, consider the double sequence (m, n >= 0) given by a_{m,n} = 0
for n < m or n > m + 1, a_{m,m} = 1 and a_{m,m+1} = -1. Another important case
is
differentiating under the integral sign; I strongly urge you to look at
the examples here of the integrals this allows you to evaluate.
Feynman was a master at
this technique. Today we discussed differentiating under the summation sign,
which is possible if the series converges absolutely in a disk. The key
concepts were the radius of convergence, which involves the notion of the
limit superior (or the
limsup). Lurking beneath all these proofs is a comparison test with a
geometric series.
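- The double-sequence example can be made concrete (my code, not from class). Each row and each column has only finitely many nonzero entries, so the inner (infinite) sums can be computed exactly, and the two orders of summation really do disagree; note that any finite truncation of the array would make them agree, which is exactly why the failure only appears with infinities.

```python
# Fubini warning made concrete: a_{m,m} = 1, a_{m,m+1} = -1, else 0.
def a(m, n):
    if n == m:
        return 1
    if n == m + 1:
        return -1
    return 0

def row_sum(m):
    """Sum over all n of a(m, n); only two entries are nonzero."""
    return a(m, m) + a(m, m + 1)                     # = 1 - 1 = 0

def col_sum(n):
    """Sum over all m of a(m, n); at most two entries are nonzero."""
    return a(n, n) + (a(n - 1, n) if n > 0 else 0)   # = 1 if n == 0 else 0

rows_then_cols = sum(row_sum(m) for m in range(100))  # every row sums to 0
cols_then_rows = sum(col_sum(n) for n in range(100))  # only column 0 survives
```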
- There are many nice ways to prove the geometric series formula. My
favorite involves a game of hoops with two players. A always gets a basket
with probability p, B with probability q (assume p, q < 1), and first to get a
basket wins. Let r = (1-p)(1-q) and x the probability A wins. Then x = p + rp
+ r^2 p + r^3 p + ...; these add the probabilities of A
winning on the first, second, third, ... attempt (as to get to the second
attempt both A and B must miss). I claim that x = p + rx, as after both miss
the probability A wins from this point onward is just x again! This gives x =
p / (1-r), or 1 + r + r^2 + r^3 + ... = 1/(1-r), the
geometric series formula! Using the memoryless nature is a key ingredient
in many problems in economics. For more, see the
article on
martingales. I go through this lecture in Math/Stat 341.
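- The hoops argument can be checked by simulation (my sketch; the values of p, q, the seed, and the sample size are arbitrary): A shoots with success probability p, then B with probability q, and the first basket wins; memorylessness gives P(A wins) = p/(1-r) with r = (1-p)(1-q).

```python
# Simulate the hoops game and compare the win frequency to p / (1 - r).
import random

random.seed(341)
p, q = 0.3, 0.4
r = (1 - p) * (1 - q)

def a_wins():
    """Play one game: A shoots, then B, repeating until someone scores."""
    while True:
        if random.random() < p:
            return True
        if random.random() < q:
            return False

n = 200_000
freq = sum(a_wins() for _ in range(n)) / n
exact = p / (1 - r)
```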
-
The geometric series formula only makes sense when \(|r| < 1\),
in which case \(1 + r + r^2 + \cdots = 1/(1-r)\); however, the right hand side makes sense for all
r other than 1. We say the function \(1/(1-r)\) is a (meromorphic)
continuation of
\(1+r+r^2+\cdots.\)
This means that they are equal when both are defined; however, \(1/(1-r)\) makes
sense for additional values of \(r\). Interpreting \(1+2+4+8+\cdots\) as \(-1\) or
\(1+2+3+4+5+ \cdots = -1/12\) actually DOES make sense, and arises in modern physics
and number theory (the latter is \(\zeta(-1)\), where \(\zeta(s)\) is the Riemann
zeta function)! We'll see \(\zeta(s)\) later in the course, as well as
analytic continuation.
-
Another interchange involving infinity to be aware of is whether the derivative of a
sum is the sum of the derivatives. If we only have finitely many terms in our sum it's true,
and follows immediately from the case of just two terms from grouping. We have
(f + g + h)' = (f + (g + h))' = f' + (g + h)' (using the rule for the
derivative of a sum of two terms) = f' + (g' + h') (again using the rule for
the derivative of the sum of two terms) = f' + g' + h' (as we can drop
parentheses by associativity). We can continue by induction and get the
derivative of a finite sum is the finite sum of the derivatives, but we do
not get the derivative of an infinite sum is the sum of the derivatives.
This is a great example of how to really milk a result for all it's worth, as
well as the dangers of being too greedy.
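- For contrast with the dangers, here is a case where termwise differentiation is fine (my quick check, inside the radius of convergence of the geometric series): differentiating \(\sum r^n = 1/(1-r)\) term by term gives \(\sum n r^{n-1}\), which should equal \(1/(1-r)^2\).

```python
# Termwise differentiation of the geometric series at r = 0.5 (inside the
# radius of convergence): sum n r^(n-1) vs the derivative 1/(1-r)^2.
r = 0.5
termwise = sum(n * r ** (n - 1) for n in range(1, 200))
closed = 1 / (1 - r) ** 2
```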
-
Here is a video
from Cameron when he was 2, explaining how to switch sums.
-
More
videos:
- Monday, September 11.
Exponential Function, Generalizations with Matrices, Inner Products:
https://youtu.be/ZLMB5Jdccfk
- Friday, September 8. Welcome to
Advanced Applied Analysis.
- Video:
Introduction, Chapter 70 Aid, Indians' Streak:
https://youtu.be/jMnzNIBv1Qg;
first day:
slides
handout
- Notes on the Method of Least Squares:
https://web.williams.edu/Mathematics/sjmiller/public_html/probabilitylifesaver/MethodLeastSquares.pdf
- Linear regression:
https://en.wikipedia.org/wiki/Linear_regression
- Chapter 70 aid: http://www.doe.mass.edu/finance/chapter70/
- Cleveland Indians and Insurance:
http://www.espn.com/mlb/story/_/id/20623379/local-window-company-give-17-million-customer-rebates-cleveland-indians-win-streak-reaches-15
- Often
when you assume more you can do more. Newton's
method is
significantly more powerful than divide
and conquer (also
called the bisecting algorithm); this is not surprising as it assumes more
information about the function of interest (namely, differentiability). The
numerical stability of Newton's method leads to many fascinating problems.
One terrific example is looking at roots in the complex plane of a
polynomial. We assign each root a different color (other than purple), and
then given any point in the complex plane, we apply Newton's method to that
point repeatedly until one of two things happen: it converges to a root or
it diverges. If the iterates of our point converges to a root, we color our
point the same color as that root, else we color it purple. This leads to Newton
fractals,
where two points extremely close to each other can be colored differently,
with remarkable behavior as you zoom in. If you're interested in more
information, let me know; a good chaos program is xaos (I
have other links to such programs for those interested). One final aside: it
is often important to evaluate these polynomials rapidly; naive substitution
is often too slow, and Horner's
algorithm is
frequently used.
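- A minimal sketch of Horner's rule (my code; the polynomial is made up): rewrite \(a_3 x^3 + a_2 x^2 + a_1 x + a_0\) as \(((a_3 x + a_2)x + a_1)x + a_0\), so evaluation uses one multiplication per coefficient instead of computing each power separately.

```python
# Horner's rule: evaluate a polynomial with one multiply-add per coefficient.
def horner(coeffs, x):
    """coeffs are [a_n, a_{n-1}, ..., a_0], highest degree first."""
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

# p(x) = 2x^3 - 6x^2 + 2x - 1 at x = 3: 54 - 54 + 6 - 1 = 5
```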
- We mentioned the exponential function
briefly at the end of class today.
Recall the exponential
function exp is defined by
\(e^z
= \exp(z) = \sum_{n = 0}^{\infty} z^n/n!\). This series converges for all \(z\). The
notation suggests that \(e^z e^w = e^{z+w}\); this is true, but it needs to be
proved. (What we have is an equality of three infinite sums; the proof uses
the binomial theorem.)
Using the Taylor series expansions for cosine
and sine, we find \(e^{i\theta} = \cos \theta + i \sin \theta\). From this we find \(|e^{i\theta}| =
1\); in fact, we can use these ideas to prove all trigonometric identities! For
example:
- Inputs: \(e^{i\theta} = \cos \theta + i \sin \theta\) and \(e^{i\theta}
e^{i\varphi} = e^{i(\theta+\varphi)}\)
- Identity: from \(e^{i\theta} e^{i\varphi} = e^{i(\theta+\varphi)}\)
we get, upon substituting in the first identity, that \((\cos \theta + i \sin \theta)(\cos
\varphi + i \sin \varphi) = \cos(\theta+\varphi) + i \sin(\theta+\varphi)\). Expanding the left hand side gives \((\cos
\theta \cos \varphi - \sin \theta \sin \varphi) + i (\sin \theta \cos \varphi + \cos \theta \sin \varphi) = \cos(\theta+\varphi) + i
\sin(\theta+\varphi)\). Equating the real parts and the imaginary parts gives the
identities.
- One can prove other identities along these
lines....
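- The angle-addition identities read off this way are easy to spot-check numerically (my sketch; the angles are arbitrary):

```python
# Check e^{i(t+u)} = e^{it} e^{iu} and the resulting angle-addition formulas.
import cmath
import math

t, u = 0.7, 1.9
lhs = cmath.exp(1j * (t + u))
rhs = cmath.exp(1j * t) * cmath.exp(1j * u)
cos_sum = math.cos(t) * math.cos(u) - math.sin(t) * math.sin(u)  # real parts
sin_sum = math.sin(t) * math.cos(u) + math.cos(t) * math.sin(u)  # imaginary parts
```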