Additional comments related to material from the class. If anyone wants to convert this to a blog, let me know. These additional remarks are for your enjoyment, and will not be on homeworks or exams. These are just meant to suggest additional topics worth considering, and I am happy to discuss any of these further.
\(\int f = f - 1\) ==> \(f - \int f = 1.\)
Thus \((1 - \int) f = 1.\)
Therefore \(f = (1 - \int)^{-1} 1.\)
Using the geometric series expansion \((1-r)^{-1} = 1 + r + r^2 + \cdots\) with \( r = \int\) we find \(f = 1 + \int 1 + \int \int 1 + \int \int \int 1 + \cdots.\)
Now \(\int 1 = \int_{t=0}^{x} 1 dt = x.\)
Now \(\int \int 1 = \int(\int 1) = \int_{t=0}^{x} t dt = x^2/2 = x^2/2!.\)
Now \(\int \int \int 1 = \int(\ \int(\int 1)\ ) = \int_{t = 0}^{x} t^2/2 dt = x^3/3!.\)
We find \(f = f(x) = 1 + x + x^2/2! + x^3/3! + \cdots = e^x.\)
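The iterated integrals above can be checked by machine: represent a polynomial by its coefficient list, implement \(\int\) as an operator on lists, and add up the terms. A quick Python sketch (the function names are mine):

```python
import math

def integrate(coeffs):
    """Antiderivative vanishing at 0: maps sum c_k x^k to sum c_k x^(k+1)/(k+1)."""
    return [0.0] + [c / (k + 1) for k, c in enumerate(coeffs)]

def partial_sum(n):
    """Coefficients of 1 + int 1 + int int 1 + ... (n applications of the integral)."""
    total = [1.0]   # the constant function 1
    term = [1.0]
    for _ in range(n):
        term = integrate(term)                     # k-th term is x^k/k!
        total = [a + b for a, b in
                 zip(total + [0.0] * (len(term) - len(total)), term)]
    return total

def evaluate(coeffs, x):
    return sum(c * x**k for k, c in enumerate(coeffs))
```

Evaluating `partial_sum(20)` at \(x = 1\) gives \(1 + 1 + 1/2! + \cdots\), which agrees with \(e\) to machine precision.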
The big question is: can this be made rigorous, and if so how? Happy Thanksgiving!
Here is a Mathematica program for sums of standardized Poisson random variables. The Manipulate feature is very nice, and allows you to see how the answers depend on the parameters.
We proved the CLT in the special case of sums of independent Poisson random variables (click here for a handout with the details of this calculation, or see our textbook). The proof technique used many of the ingredients of typical analysis proofs. Specifically, we Taylor expand, use standard functions, and argue that the higher-order terms do not matter in the limit relative to the main term (though they crucially affect the rate of convergence). We also got to take the logarithm of a product.
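The vanishing of the higher-order terms can be seen concretely. Below is a Python sketch (not the handout's calculation, which works with the probabilities directly) using the moment generating function of a standardized Poisson: with \(u = t/\sqrt{\lambda}\), the logarithm of \(E[e^{tZ}]\) for \(Z = (X-\lambda)/\sqrt{\lambda}\) is \(\lambda(e^{u}-1) - t\sqrt{\lambda} = t^2/2 + t^3/(6\sqrt{\lambda}) + \cdots\), so everything past the main term dies like \(1/\sqrt{\lambda}\).

```python
import math

def standardized_poisson_log_mgf(lam, t):
    # log E[exp(t (X - lam)/sqrt(lam))] for X ~ Poisson(lam), using
    # E[e^{u X}] = exp(lam (e^u - 1)) with u = t/sqrt(lam); combining the
    # exponents avoids overflow for large lam.
    u = t / math.sqrt(lam)
    return lam * math.expm1(u) - t * math.sqrt(lam)

# As lam grows this approaches t^2/2, the log-mgf of a standard normal.
for lam in (10.0, 1000.0, 100000.0):
    print(lam, standardized_poisson_log_mgf(lam, 1.0))
```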
Here is a nice video on the Fibonacci numbers in nature: http://www.youtube.com/watch?v=J7VOA8NxhWY
There are many ways to prove Binet's formula, an explicit, closed-form expression for the n-th Fibonacci number. One is through divine inspiration, another through generating functions and partial fractions. Generating functions occur in a variety of problems; there are many applications near and dear to me in number theory (such as attacking the Goldbach or Twin Prime Problem via the Circle Method). The great utility of Binet's formula is that we can jump to any Fibonacci number without having to compute all the intermediate ones. Even though such large numbers are hard to work with, we can jump straight to the trillionth (and if we take logarithms then we can specify it quite well).
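As an illustration, here is a Python sketch of Binet's formula (the function names are mine; floating point limits the direct formula to moderate \(n\), while the logarithm version gives the number of digits of even the trillionth Fibonacci number):

```python
import math

SQRT5 = math.sqrt(5)
PHI = (1 + SQRT5) / 2    # golden ratio
PSI = (1 - SQRT5) / 2    # conjugate root, |PSI| < 1

def fib_binet(n):
    # Binet: F_n = (PHI^n - PSI^n) / sqrt(5); floating point keeps this
    # exact only up to roughly n = 70.
    return round((PHI**n - PSI**n) / SQRT5)

def fib_digits(n):
    # Since |PSI| < 1, F_n is the nearest integer to PHI^n / sqrt(5), so
    # log10 F_n is about n log10 PHI - log10 sqrt(5); no huge numbers needed.
    return int(n * math.log10(PHI) - math.log10(SQRT5)) + 1
```

For example, `fib_digits` happily reports the size of the trillionth Fibonacci number, which `fib_binet` could never reach.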
We will do a lot more with generating functions. It's amazing how well they allow us to pass from local information (the \(a_n\)'s) to global information (the function \(G_a\)) and then back to local information (the \(a_n\)'s)! The trick, of course, is to be able to work with \(G_a\) and extract information about the \(a_n\)'s; fortunately, there are lots of techniques for this. We can see why this is so useful: when we create a function from our sequence, all of a sudden the power and methods of calculus and real analysis become available. This is similar to the gain in extrapolating the factorial function to the Gamma function. Later we'll see the benefit of going one step further, into the complex plane!
Today we saw more properties of generating functions. The miracle continues -- they provide a powerful way to handle the algebra. For example, we could prove that the sum of two independent Poisson random variables is Poisson by looking at the generating function and using our uniqueness result; sadly we don't have something similar in the continuous case (complex analysis is needed). We saw how to get a closed-form expression for the Fibonacci numbers, and next class we will do \(\sum_{m=0}^n \binom{n}{m}^2 = \binom{2n}{n}\). We compared probability generating functions and moment generating functions, and talked about where the algebra is easier.
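For the record, the identity is \(\sum_{m=0}^n \binom{n}{m}^2 = \binom{2n}{n}\): compare coefficients of \(x^n\) on both sides of \((1+x)^n (1+x)^n = (1+x)^{2n}\), using \(\binom{n}{m} = \binom{n}{n-m}\). A quick sanity check in Python:

```python
from math import comb

# Coefficient of x^n in (1+x)^n (1+x)^n is sum_m C(n,m) C(n,n-m) = sum_m C(n,m)^2,
# while in (1+x)^(2n) it is C(2n,n).
for n in range(12):
    assert sum(comb(n, m)**2 for m in range(n + 1)) == comb(2 * n, n)
```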
The main item to discuss is that if \(X\) is a random variable taking on non-negative integer values then if we let \(a_n = {\rm Prob}(X = n)\) we can interpret \(G_a(s) = E[s^X]\). This is a great definition, and allowed us to easily reprove many of our results.
The idea that a given expression can be rewritten in an equivalent way for some values of a parameter, while the new expression also makes sense for other values, is related to the important concept of analytic or meromorphic continuation, one of the big results / techniques in complex analysis. The geometric series formula only makes sense when \(|r| < 1\), in which case \(1 + r + r^2 + \cdots = 1/(1-r)\); however, the right hand side makes sense for all \(r\) other than 1. We say the function \(1/(1-r)\) is a (meromorphic) continuation of \(1 + r + r^2 + \cdots\). This means that they are equal when both are defined; however, \(1/(1-r)\) makes sense for additional values of \(r\). Interpreting \(1+2+4+8+\cdots\) as \(-1\), or \(1+2+3+4+5+\cdots\) as \(-1/12\), actually DOES make sense, and arises in modern physics and number theory (the latter is \(\zeta(-1)\), where \(\zeta(s)\) is the Riemann zeta function)!
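A two-line numerical check of the two regimes (the second value is of course assigned by the continuation, not by a convergent sum):

```python
# Inside the disk of convergence the series and the closed form agree.
r = 0.5
partial = sum(r**k for k in range(200))
assert abs(partial - 1 / (1 - r)) < 1e-12

# Outside it, only the closed form makes sense: 1/(1-2) = -1 is the
# value the continuation assigns to 1 + 2 + 4 + 8 + ...
assert 1 / (1 - 2) == -1.0
```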
For analytic continuation we need some ingredient to let us get another expression. It's thus worth asking what the source of the analytic continuation is. For the geometric series, it's the geometric series formula. For the Gamma function, it's integration by parts; this led us to the formula \(\Gamma(s+1) = s \Gamma(s)\). For the Riemann zeta function, it's the Poisson summation formula, which relates sums of a nice function at integer arguments to sums of its Fourier transform at integer arguments. There are many proofs of this result. In my book on number theory, I prove it by considering the periodic function \(F(x) = \sum_{n = -\infty}^\infty f(x+n)\). This function is clearly periodic with period 1 (if \(f\) decays nicely). Assuming \(f\), \(f'\) and \(f''\) have reasonable decay, the result now follows from facts about pointwise convergence of Fourier series.
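To see Poisson summation in action, take the Gaussian \(f(x) = e^{-\pi t x^2}\), whose Fourier transform is \(\hat{f}(\xi) = t^{-1/2} e^{-\pi \xi^2/t}\); summing both sides over the integers gives the functional equation of the theta function, \(\theta(t) = \theta(1/t)/\sqrt{t}\) with \(\theta(t) = \sum_n e^{-\pi t n^2}\), which is the standard route to the continuation of zeta. It's easy to check numerically (a Python sketch, not from the book):

```python
import math

def theta(t, N=50):
    # sum_{n=-N}^{N} exp(-pi t n^2); the tail beyond N is utterly negligible
    return sum(math.exp(-math.pi * t * n * n) for n in range(-N, N + 1))

# Poisson summation for f(x) = exp(-pi t x^2) gives theta(t) = theta(1/t)/sqrt(t).
t = 0.5
print(theta(t), theta(1 / t) / math.sqrt(t))
```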
Briefly, the reason generating functions are so useful is that they build up a nice function from data we can control, and we can extract the information we need without too much trouble. There are lots of different formulations, but the most important property is that they are well-behaved with respect to convolution (the generating function of a convolution is the product of the generating functions).
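Concretely, if \(c_n = \sum_k a_k b_{n-k}\) is the convolution then \(G_c(s) = G_a(s) G_b(s)\). A small Python illustration (the names are mine):

```python
def convolve(a, b):
    # (a * b)_n = sum_k a_k b_{n-k}
    c = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

def gen(coeffs, s):
    # G_a(s) = sum_n a_n s^n
    return sum(a_n * s**n for n, a_n in enumerate(coeffs))

# e.g. the distribution of the sum of two independent dice is the
# convolution of the two uniform distributions
die = [0.0] + [1 / 6] * 6       # index = face value
two_dice = convolve(die, die)
```

The entry `two_dice[7]` is \(6/36\), the familiar probability of rolling a seven.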
(* f[n] is the integral of 1/(1+x^n) over (0, Infinity); the trailing If
   factor multiplies by 2 when n is even, in which case the integrand is
   even in x and f[n] is the integral over the whole real line *)
f[n_] := Integrate[1/(1 + x^n), {x, 0, Infinity}] If[Mod[n, 2] == 1, 1, 2]
For[n = 2, n <= 10, n++, Print["n = ", n, " and integral is ", f[n], "."]]