Additional comments related to material from the class. If anyone wants to convert this to a blog, let me know. These additional remarks are for your enjoyment, and will not be on homeworks or exams. These are just meant to suggest additional topics worth considering, and I am happy to discuss any of these further.
Here is a Mathematica program for sums of standardized Poisson random variables. The Manipulate feature is very nice, and allows you to see how answers depend on parameters.
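If you want to tinker before downloading that program, here is a minimal sketch of the same idea (my own illustration, not the class program): the sum of \(n\) independent Poisson(\(\lambda\)) random variables is Poisson(\(n\lambda\)), so we can standardize, rescale the mass function, and compare it to the standard normal density, with Manipulate sliders for \(n\) and \(\lambda\).

Manipulate[
 Module[{mu = n lambda, sigma = Sqrt[n lambda]},
  Show[
   (* standardized, rescaled Poisson mass function *)
   ListPlot[
    Table[{(k - mu)/sigma, sigma PDF[PoissonDistribution[mu], k]},
     {k, Max[0, Floor[mu - 4 sigma]], Ceiling[mu + 4 sigma]}],
    Filling -> Axis],
   (* standard normal density for comparison *)
   Plot[PDF[NormalDistribution[0, 1], x], {x, -4, 4}, PlotStyle -> Red],
   PlotRange -> All]],
 {{lambda, 1}, 1/2, 5}, {{n, 10}, 1, 200, 1}]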
We proved the CLT in the special case of sums of independent Poisson random variables (click here for a handout with the details of this calculation, or see our textbook). The proof technique there used many of the ingredients found in typical analysis proofs. Specifically, we Taylor expand, use standard facts about common functions, and argue that the higher order terms do not matter in the limit relative to the main term (though they crucially affect the rate of convergence). We also got to take the logarithm of a product.
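To recall the flavor of the computation (the handout has the actual details, and its normalization may differ slightly from this sketch): if \(X_\lambda\) is Poisson with parameter \(\lambda\), then \(E[e^{tX_\lambda}] = e^{\lambda(e^t-1)}\), so for the standardized variable
\[ \log E\left[e^{t(X_\lambda - \lambda)/\sqrt{\lambda}}\right] \;=\; \lambda\left(e^{t/\sqrt{\lambda}} - 1\right) - t\sqrt{\lambda} \;=\; \lambda\left(\frac{t}{\sqrt{\lambda}} + \frac{t^2}{2\lambda} + O(\lambda^{-3/2})\right) - t\sqrt{\lambda} \;=\; \frac{t^2}{2} + O(\lambda^{-1/2}), \]
and \(t^2/2\) is the logarithm of the moment generating function of the standard normal. The Taylor expansion, the logarithm converting a product to a sum, and the bookkeeping of the error term are all visible in this one line.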
Here is a nice video on the Fibonacci numbers in nature: http://www.youtube.com/watch?v=J7VOA8NxhWY
There are many ways to prove Binet's formula, which gives an explicit, closed form expression for the n-th Fibonacci number. One is through divine inspiration, another through generating functions and partial fractions. Generating functions occur in a variety of problems; there are many applications near and dear to me in number theory (such as attacking the Goldbach or Twin Prime Problem via the Circle Method). The great utility of Binet's formula is that we can jump to any Fibonacci number without having to compute all the intermediate ones. Even though it might be hard to work with such large numbers, we can jump to the trillionth (and if we take logarithms then we can specify it quite well).
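To sketch the generating function route (with the convention \(F_0 = 0\), \(F_1 = 1\)): the recurrence \(F_{n+2} = F_{n+1} + F_n\) forces \(G(x) = \sum_{n \ge 0} F_n x^n\) to equal \(x/(1-x-x^2)\), and partial fractions give
\[ \frac{x}{1-x-x^2} \;=\; \frac{1}{\sqrt{5}}\left(\frac{1}{1-\phi x} - \frac{1}{1-\psi x}\right), \qquad \phi = \frac{1+\sqrt{5}}{2}, \quad \psi = \frac{1-\sqrt{5}}{2}. \]
Expanding each piece as a geometric series and comparing coefficients yields Binet's formula \(F_n = (\phi^n - \psi^n)/\sqrt{5}\); since \(|\psi| < 1\), \(F_n\) is the integer nearest \(\phi^n/\sqrt{5}\), which is why taking logarithms pins down even the trillionth Fibonacci number so well.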
We will do a lot more with generating functions. It's amazing how well they allow us to pass from local information (the \(a_n\)'s) to global information (the generating function \(G_a\)) and then back to local information (the \(a_n\)'s)! The trick, of course, is to be able to work with \(G_a\) and extract information about the \(a_n\)'s. Fortunately, there are lots of techniques for this. In fact, we can see why this is so useful: when we create a function from our sequence, all of a sudden the power and methods of calculus and real analysis become available. This is similar to the gain in extrapolating the factorial function to the Gamma function. Later we'll see the benefit of going one step further, into the complex plane!
Today we saw more properties of generating functions. The miracle continues -- they provide a powerful way to handle the algebra. For example, we could prove the sum of two independent Poisson random variables is Poisson by looking at the generating function and using our uniqueness result; we sadly don't have something similar in the continuous case (complex analysis is needed). We saw how to get a closed form expression for the Fibonacci numbers, and next class we will do \(\sum_{m=0}^n \left({n \atop m}\right)^2 = \left({2n \atop n}\right)\). We compared probability generating functions and moment generating functions, and talked about where the algebra is easier.
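As a preview of that identity (one standard generating function argument, not necessarily the one we will use in class): \(\left({2n \atop n}\right)\) is the coefficient of \(x^n\) in \((1+x)^{2n} = (1+x)^n (1+x)^n\), and extracting the coefficient of \(x^n\) from the product gives \(\sum_{m=0}^n \left({n \atop m}\right)\left({n \atop n-m}\right)\), which equals \(\sum_{m=0}^n \left({n \atop m}\right)^2\) by the symmetry of the binomial coefficients.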
The main item to discuss is that if \(X\) is a random variable taking on non-negative integer values and we let \(a_n = {\rm Prob}(X = n)\), then we can interpret \(G_a(s)\) as \(E[s^X]\). This is a great definition, and allowed us to easily reprove many of our results.
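For example (a quick sketch of the Poisson calculation mentioned above): if \(X\) is Poisson with parameter \(\lambda\), then
\[ G_a(s) \;=\; E[s^X] \;=\; \sum_{n=0}^\infty e^{-\lambda}\frac{\lambda^n}{n!} s^n \;=\; e^{\lambda(s-1)}, \]
and if \(Y\) is an independent Poisson with parameter \(\mu\), then \(E[s^{X+Y}] = E[s^X]\,E[s^Y] = e^{(\lambda+\mu)(s-1)}\), which by our uniqueness result is the generating function of a Poisson random variable with parameter \(\lambda+\mu\).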
The idea of noticing that a given expression can be rewritten in an equivalent way for some values of a parameter, with the new expression making sense for other values as well, is related to the important concept of analytic (or meromorphic) continuation, one of the big results / techniques in complex analysis. The geometric series formula only makes sense when \(|r| < 1\), in which case \(1 + r + r^2 + \cdots = 1/(1-r)\); however, the right hand side makes sense for all \(r\) other than 1. We say the function \(1/(1-r)\) is a (meromorphic) continuation of \(1 + r + r^2 + \cdots\). This means that they are equal when both are defined; however, \(1/(1-r)\) makes sense for additional values of \(r\). Interpreting \(1 + 2 + 4 + 8 + \cdots\) as \(-1\), or \(1 + 2 + 3 + 4 + 5 + \cdots\) as \(-1/12\), actually DOES make sense, and arises in modern physics and number theory (the latter is \(\zeta(-1)\), where \(\zeta(s)\) is the Riemann zeta function)!
For analytic continuation we need some ingredient to let us get another expression. It's thus worth asking what the source of the analytic continuation is. For the geometric series, it's the geometric series formula. For the Gamma function, it's integration by parts; this led us to the formula \(\Gamma(s+1) = s\,\Gamma(s)\). For the Riemann zeta function, it's the Poisson summation formula, which relates sums of a nice function at integer arguments to sums of its Fourier transform at integer arguments. There are many proofs of this result. In my book on number theory, I prove it by considering the periodic function \(F(x) = \sum_{n = -\infty}^\infty f(x+n)\). This function is clearly periodic with period 1 (if \(f\) decays nicely). Assuming \(f\), \(f'\) and \(f''\) have reasonable decay, the result now follows from facts about pointwise convergence of Fourier series.
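To give the flavor of that argument (a sketch, with the Fourier transform normalized as \(\widehat{f}(\xi) = \int_{-\infty}^\infty f(x) e^{-2\pi i x \xi}\,dx\)): the \(m\)-th Fourier coefficient of \(F\) is
\[ \int_0^1 F(x) e^{-2\pi i m x}\,dx \;=\; \sum_{n=-\infty}^\infty \int_0^1 f(x+n) e^{-2\pi i m x}\,dx \;=\; \int_{-\infty}^\infty f(x) e^{-2\pi i m x}\,dx \;=\; \widehat{f}(m), \]
so \(F(x) = \sum_m \widehat{f}(m) e^{2\pi i m x}\) wherever the Fourier series converges pointwise, and setting \(x = 0\) gives Poisson summation: \(\sum_n f(n) = \sum_m \widehat{f}(m)\).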
Briefly, the reason generating functions are so useful is that they build up a nice function from data we can control, and we can extract the information we need without too much trouble. There are lots of different formulations, but the most important is that they are well-behaved with respect to convolution (the generating function of a convolution is the product of the generating functions).
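Concretely: if \(c_n = \sum_{m=0}^n a_m b_{n-m}\) (the convolution of the sequences), then multiplying the two series and collecting powers of \(s\) gives \(G_c(s) = G_a(s) G_b(s)\); for non-negative integer valued random variables this is exactly the statement that the generating function of a sum of independent random variables is the product of their generating functions.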
Riemann Zeta Function
We gave a rough argument (doing a contour integral) that connects the distribution of zeros of \(\zeta(s)\) to counting the number of primes. This highlights both the importance of the Euler product and of taking a logarithm. To make the argument rigorous, however, we need a better analytic continuation, as we want to shift one of the contour lines to negative infinity; the alternating zeta function we discussed last time is not enough for this. To do this will require the Gamma function, which is the generalization of the factorial function.
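In symbols, the two ingredients look like this (a sketch; normalizations vary): taking the logarithm of the Euler product \(\zeta(s) = \prod_p (1 - p^{-s})^{-1}\) and differentiating gives \(-\frac{\zeta'}{\zeta}(s) = \sum_{n \ge 2} \Lambda(n) n^{-s}\), where \(\Lambda(n) = \log p\) if \(n\) is a power of the prime \(p\) and is \(0\) otherwise. A contour integral of \(-\frac{\zeta'}{\zeta}(s)\,\frac{x^s}{s}\) then picks up a main term \(x\) from the pole of \(\zeta(s)\) at \(s = 1\) and a term \(-x^\rho/\rho\) from each zero \(\rho\), which is how the zeros control the count of primes (weighted by \(\Lambda\)) up to \(x\).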
Elementary proof of the Prime Number Theorem: https://www.math.lsu.edu/~mahlburg/teaching/handouts/2014-7230/Selberg-ElemPNT1949.pdf (Selberg's paper) and https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1063042/ (Erdos' paper)
The dispute / controversy around the elementary proof: http://www.math.columbia.edu/~goldfeld/ErdosSelbergDispute.pdf
Stirling's Formula
We first proved some basic properties of the Gamma function: https://en.wikipedia.org/wiki/Gamma_function
We gave a poor mathematician's analysis of the size of n!; the best result is Stirling's formula, which says \(n!\) is about \(n^n e^{-n} \sqrt{2 \pi n}\, (1 +\) error of size \(1/(12n) + \cdots)\). The standard way to get upper and lower bounds is by using the comparison method from calculus (basically the integral test); we could get a better result by using a better summation formula, say Simpson's method or Euler-Maclaurin; we'll do all this on Wednesday. We might return to Simpson's method later in the course, as one proof of it involves techniques that lead to the creation of low(er) risk portfolios! Ah, so much that we can do once we learn expectation..... Of course, our analysis above is not for \(n!\) but rather for \(\log(n!) = \log 1 + \cdots + \log n\); summifying a problem is a very important technique, and one of the reasons the logarithm shows up so frequently. If you are interested, let me know, as this is related to research of mine on Benford's law of digit bias.
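For definiteness, here is one standard way to set up the comparison bound: since \(\log t\) is increasing, \(\int_{k-1}^{k} \log t\,dt \le \log k \le \int_{k}^{k+1} \log t\,dt\), and summing over \(k = 1, \dots, n\) gives
\[ n\log n - n \;\le\; \log n! \;\le\; (n+1)\log(n+1) - n, \]
so \(\log n!\) is \(n \log n - n\) up to lower order terms, i.e. \(n!\) is roughly \(n^n e^{-n}\); the \(\sqrt{2\pi n}\) in Stirling's formula is the next refinement, which is where the better summation formulas come in.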
It wasn't too hard to get a good upper bound; the lower bound required work. We first just had \(n < n!\), which is quite poor. We then improved that to \(2^{n-1} < n!\), or more generally eventually \(c^n < n!\) for any fixed \(c\). This starts to give a sense of how rapidly \(n!\) grows. We then had a major advance when we split the numbers \(1, \dots, n\) into two halves, and got \(2^{n/2-1} (n/2)^{n/2 - 1}\), which gives a lower bound of essentially \(n^{n/2} = (\sqrt{n})^n\). While we want \(n/e\), \(\sqrt{n}\) isn't horrible, and with more work this can be improved.
Instead of approximating all the numbers in \(n/2, \dots, n\) by \(n\), we saw we could do much better by using the `Farmer Brown' problem: if we pair the numbers so that the sums are constant, the largest product comes from the middle pair, and thus the product over the pairs is dominated by \(((3n/4)^2)^{n/4}\). By splitting into four intervals we got an upper bound of approximately \(n^n 2.499^{-n}\), pretty close to \(n^n e^{-n}\).
There are other approaches to proving Stirling; the fact that \(\Gamma(n+1) = n!\) allows us to use techniques from real analysis / complex analysis to get Stirling by analyzing the integral. This is the Method of Stationary Phase (or the Method of Steepest Descent), very powerful and popular in mathematical physics. See Mathworld for this approach, or page 29 of my handout here.
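(* The snippet below builds a 2 x 2 matrix H from the matrices M, R, F, G and the scalar fac, checks that Det[H] simplifies to 1, and displays the simplified entries of H under the assumption that t and u are real. *)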
M = {{-1,r Cos[t] + r I Sin[t]},{ -r Cos[t] + r I Sin[t], 1}};
R = {{Cos[u] + I Sin[u], 0},{0,1}};
F = {{-1, I},{1,I}};
G = {{-I,I},{1,1}};
fac = Cos[u/2]+I Sin[u/2];
H = (G .(R. M) . F) / (fac Sqrt[4 - 4 r^2]);
Simplify[Det[H]]
Simplify[MatrixForm[ H], Assumptions-> {Element[t,Reals], Element[u, Reals]}]
(* simplified output: each entry has denominator Sqrt[1 - r^2] *)
({
 {(r Sin[t+u/2] + Sin[u/2])/Sqrt[1 - r^2], (r Cos[t+u/2] - Cos[u/2])/Sqrt[1 - r^2]},
 {(r Cos[t+u/2] + Cos[u/2])/Sqrt[1 - r^2], (-r Sin[t+u/2] + Sin[u/2])/Sqrt[1 - r^2]}
})
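(* For n = 2 through 10, compute Integrate[1/(1 + x^n), {x, 0, Infinity}], doubling the answer when n is even; the half-line integral has the closed form (Pi/n)/Sin[Pi/n] for n > 1. *)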
f[n_] := Integrate[1/(1+x^n), {x,0,Infinity}] If[Mod[n,2] == 1,1,2]
For[n = 2, n <= 10, n++, Print["n = ", n, " and integral is ", f[n], "."]]