For example, 314 would be a natural test case: its digit sum is 3 + 1 + 4 = 8, which is not divisible by 3, so the rule predicts that 314 is not divisible by 3.

Well, we might try looking at other ways of writing our number. Often there are different forms that are equivalent, but bring out different properties. For digits, we recall this comes from powers of 10: our number 314 can be written as

314 = 3*100 + 1*10 + 4*1

So, notice what happens if we subtract from 314 the sum of its digits:

314 - (3+1+4) = (3*100 - 3) + (1*10 - 1) + (4*1 - 4)

314 - (3+1+4) = (3)*99 + (1)*9 + (4)*0

Ah. Notice that the right hand side is clearly divisible by 3, as each term is multiplied by 0 or 9 or 99. If (3+1+4) is divisible by 3, we bring it over to the other side and get that 314 equals a number divisible by three! If (3+1+4) is not divisible by three, then bringing it over writes 314 as a number divisible by three plus a number that is NOT, so 314 is NOT divisible by three!
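The bring-it-over argument can be checked mechanically; here is a small sketch (the helper name `digit_sum` is our own, not from the text):

```python
def digit_sum(n):
    # Sum of the base-10 digits of n, e.g. digit_sum(314) == 3 + 1 + 4.
    return sum(int(d) for d in str(n))

# n minus its digit sum is always a multiple of 9 (hence of 3) ...
for n in range(1, 1000):
    assert (n - digit_sum(n)) % 9 == 0

# ... so n is divisible by 3 exactly when its digit sum is.
for n in range(1, 1000):
    assert (n % 3 == 0) == (digit_sum(n) % 3 == 0)
```

For 314, the digit sum is 8, which is not a multiple of 3, so 314 is not divisible by 3.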

Now, we've done this proof in the special case when our number is 314. There is nothing wrong with first proving something for a specific case, number, or function, as long as we then generalize. We see that the exact same proof would carry through if instead we considered the number a_k*10^k + a_{k-1}*10^{k-1} + ... + a_1*10 + a_0, with digits a_k, ..., a_0.
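In symbols, the same subtraction for a general number looks like this (a LaTeX sketch; N and the digits a_0, ..., a_k are our notation for the general case):

```latex
N = a_k \cdot 10^k + \cdots + a_1 \cdot 10 + a_0 \cdot 1,
\qquad
N - (a_k + \cdots + a_1 + a_0)
   = a_k (10^k - 1) + \cdots + a_1 (10 - 1) + a_0 (1 - 1).
```

Every factor 10^i - 1 is a string of i nines, hence divisible by 3 (indeed by 9), so exactly the same bring-it-over argument applies.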

Theorem: Let f(x) be a continuous function on the Real Line. If the integral of f(x) vanishes for EVERY interval [a,b] with a < b, then f(x) is identically zero.

If we try to prove this directly we might run into some trouble, for we are given information on f(x) over intervals, but must prove something at a single point. What if, perhaps, we try a proof by contradiction? We assume for the sake of argument that the result is false: all the hypotheses hold, but there is a counterexample, say f, that is NOT zero everywhere. Now we have something to work with, and we try to show that such a function cannot satisfy all of our hypotheses. This contradiction means that our initial assumption that there was a counterexample IS FALSE, and thus the theorem does hold.

Let us try this in this case. So we have a continuous function which integrates to zero over any interval, but is not identically zero. So there is some point, say x_0, where f(x_0) is not zero; without loss of generality, say f(x_0) > 0 (if f(x_0) < 0, simply run the argument below on -f).

Well, let us glean all the information we can out of our hypotheses on f. We assume f is continuous. So, if we choose e > 0, then we know there is a d > 0 such that, if |x - x_0| < d, then |f(x) - f(x_0)| < e.

But, we get freedom in choosing epsilon! We know that our f must integrate to zero over any interval, so we have Integral{ f(x), [x0-d, x0+d] } = 0. But we have f(x_0) > 0, so take e = f(x_0)/2: then for every x with |x - x_0| < d, we get f(x) > f(x_0) - e = f(x_0)/2 > 0.

Now we can get a contradiction. As f(x) > f(x_0)/2 for all x with |x - x_0| < d, we have

0 = Integral{ f(x), [x0-d, x0+d] } >= Integral{ f(x_0)/2, [x0-d, x0+d] } = f(x_0)*d,

where the first equality follows from our assumption that f integrates to zero over any interval, and the inequality from the bound f(x) > f(x_0)/2 on this interval. But f(x_0)*d > 0, and we've reached a contradiction! Basically the above is just a rigorous way of saying that if a continuous function is positive at some point, it is positive in a neighborhood of the point and thus cannot integrate to zero there.
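To make the neighborhood argument concrete, here is a small numerical sketch (the function f, the point x0, and the radius d are our own choices for illustration): a continuous function positive at a point has a positive integral over a small interval around it.

```python
# Sketch: f is continuous with f(x0) > 0; once d is small enough that
# f > f(x0)/2 on all of [x0 - d, x0 + d], the integral over that
# interval is at least f(x0)*d, so it cannot vanish.
def f(x):
    return 1.0 - x * x   # example: continuous, f(0) = 1 > 0

x0, d = 0.0, 0.25        # on [-0.25, 0.25], f(x) >= 0.9375 > 0.5 = f(x0)/2

# Midpoint Riemann sum approximating Integral{ f(x), [x0-d, x0+d] }.
n = 10000
width = 2 * d / n
integral = sum(f(x0 - d + (i + 0.5) * width) for i in range(n)) * width

# The integral is bounded below by f(x0)*d = 0.25, hence is not zero.
assert integral >= f(x0) * d
```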

To prove something by induction, there are two things you must do:

First: show it is true for n = 0 (or n = 1 if you just want to prove it from 1 on). This is called the BASIS STEP of the proof. Second: show that IF the result is true for n, THEN it is true for n+1; that is, you get to assume the result for n when proving it for n+1. This is called the INDUCTIVE STEP. If you can do both, you are done!

Why? Well, the basis step gives the result for n=0, and then the inductive step with n=0 yields it for n=1; with n=1 it yields n=2, and so on. Alternatively, we may explain this as follows. Let P(n) mean that the result is true for n. We have P(0), and P(n) implies P(n+1). Taking n=0 gives us two statements: P(0) is true, and P(0) implies P(1). By the laws of logic, we can now conclude P(1) is true. Next, taking n=1, we have P(1) is true and P(1) implies P(2); thus P(2) is true, and so on.
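The "and so on" in this paragraph can be sketched as a propagation of truth values (a toy model; the table `known_true` is our own device, not from the text):

```python
# Model P(0), P(1), ... as entries of a table of truth values.
# The basis step asserts P(0); the inductive step is the rule
# "if P(n) is true, then P(n+1) is true".
known_true = {0: True}            # basis step: P(0)

for n in range(100):              # apply the inductive step repeatedly
    if known_true.get(n):
        known_true[n + 1] = True  # P(n) implies P(n+1)

# After unrolling, P(n) is established for every n we reached.
assert all(known_true.get(n) for n in range(101))
```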

Theorem: 1 + 2 + ... + n = n*(n+1)/2.

Proof By Induction:

Basis Step: When n = 1, the left hand side is 1 and the right hand side is 1*(1+1)/2 = 1, so the claim holds.

Inductive Step: Assume true for some n, say n = k.

Now we must show it is true for n = k+1. So, let us examine 1 + 2 + ... + k + (k+1).

By induction, we know what the sum of the first k terms is, so we can write:

1 + ... + (k+1) = k*(k+1)/2 + (k+1) = (k+1) * { k/2 + 1 } = (k+1) * { (k+2) / 2 } = (k+1)*((k+1) + 1)/2

But this is exactly what we were trying to show! We've now completed the proof!
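A quick numerical sanity check of the closed form (this complements, but does not replace, the induction proof):

```python
# Check 1 + 2 + ... + n == n*(n+1)/2 for the first hundred n.
for n in range(1, 101):
    assert sum(range(1, n + 1)) == n * (n + 1) // 2

# The inductive step itself: if the sum up to k is k*(k+1)/2,
# then adding (k+1) must give (k+1)*(k+2)/2.
for k in range(1, 101):
    assert k * (k + 1) // 2 + (k + 1) == (k + 1) * (k + 2) // 2
```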

Note: there is an equivalent form of Induction that is often easier to use: instead of assuming the result is true just for n = k and then showing it must be true for n = k+1, one assumes it is true for ALL integers j ≤ k, and then shows that it holds for k+1. We leave it to the interested reader to prove that these two forms are equivalent when using Induction on the integers.

Theorem: |f(x) + g(x)| <= |f(x)| + |g(x)| for f, g real valued functions.

Case 1: f(x), g(x) >= 0.

Then |f(x) + g(x)| = f(x) + g(x) = |f(x)| + |g(x)|

Case 2: f(x) >= 0, g(x) < 0

We want information about f(x) + g(x). Adding g(x) < 0 to f(x) gives f(x) + g(x) < 0 + f(x) = |f(x)|, but taking absolute values of both sides of an inequality can change its direction. So, a standard trick is to break this case into subcases:

Subcase A: 0 <= f(x) + g(x). Then as g(x) < 0, f(x) + g(x) < f(x). So 0 <= |f(x) + g(x)| < f(x) <= |f(x)| + |g(x)|.

Subcase B: f(x) + g(x) < 0. Then 0 < -(f(x) + g(x)) <= -g(x), as f(x) >= 0. So 0 < |f(x) + g(x)| <= |g(x)| <= |f(x)| + |g(x)|.

Case 3: f(x) < 0, g(x) >= 0

This is proved identically as in Case 2: just switch the roles of f and g.

Case 4: f(x) < 0, g(x) < 0

This is proved almost identically as in Case 1: now f(x) + g(x) < 0, so |f(x) + g(x)| = -f(x) - g(x) = |f(x)| + |g(x)|.
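All four sign cases can be exercised numerically; a small sketch (the sample values are our own choices for illustration):

```python
# |a + b| <= |a| + |b| for every sign combination of a and b,
# where a = f(x) and b = g(x) at some point x.
values = [-2.5, -1.0, 0.0, 1.0, 3.5]
for a in values:          # Case 1: a, b >= 0;  Case 2: a >= 0 > b;
    for b in values:      # Case 3: b >= 0 > a; Case 4: a, b < 0
        assert abs(a + b) <= abs(a) + abs(b)
```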

Notice the difference between this and the incorrect 'Proof by Example': just because a claim holds for the particular values you check doesn't mean it holds everywhere. Here, by contrast, the four cases exhaust ALL possibilities, so checking every case proves the claim in full generality. (Of course, if a claim fails somewhere, a single such counterexample disproves it.) See also the arguments for Proofs by Example.

As a specific example, consider f(x) = x

Let f(n) = Sum_{k = 0 to n} k^{2}. Prove f(n) is a cubic polynomial in n, i.e. that f(n) = a n^{3} + b n^{2} + c n + d for some constants a, b, c and d.
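As a starting point for this exercise, one can tabulate the first few values of f(n) (a sketch; the closed form in the comment is the standard sum-of-squares formula, stated here without proof):

```python
def f(n):
    # f(n) = 0^2 + 1^2 + ... + n^2
    return sum(k * k for k in range(n + 1))

# First values: 0, 1, 5, 14, 30, 55 -- growing like n^3/3, not n^2.
assert [f(n) for n in range(6)] == [0, 1, 5, 14, 30, 55]

# They agree with the classical cubic n*(n+1)*(2n+1)/6.
for n in range(100):
    assert 6 * f(n) == n * (n + 1) * (2 * n + 1)
```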