There are two main types of proofs: correct and incorrect. We'll try to give a few examples of each here. Among correct proofs, one can often prove the same theorem in a variety of different ways, and some ways are easier than others. Below we'll discuss some favorites: brute force, contradiction, induction, divide and conquer, and counterexample. These notes were written in 1996-1997 as general review notes for undergraduate classes at Princeton (from calculus to linear algebra). In many of the proofs below we see the assumptions guide the proof. In general, one of two things happens when we remove a hypothesis: either the theorem is now false, or the proof is more difficult (for example, the proof of the Fundamental Theorem of Calculus is a lot simpler if you assume that f'(x) is continuous and bounded). A good way to check your mastery of a theorem is to remove just one hypothesis and see if you can find a counterexample.


(I). False Proof: Proof By Example:

Here is an example of what NOT to do. Just because something works for everything you test it on does not mean it is true. You can prove something is false by giving a counterexample: this is okay and is often easiest (see below). But just because something works every time you check it does not mean it is always true. For example: 16/64 = 1/4 in lowest terms; 19/95 = 1/5 in lowest terms. Why? Well, we notice that whenever you have the ratio of two-digit numbers, if the same digit appears diagonally you can just cancel it. So 12/24 = 1/4? No: 12/24 = 1/2. The "rule" happened to work for the two fractions we checked, and it fails in general.
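A short Python sketch makes the point: the function below (the name `cancel_diagonally` is my own, purely for illustration) applies the bogus cancellation rule, and a couple of checks show it is right on the two lucky examples and wrong on 12/24.

```python
# A quick check of the bogus "diagonal cancellation" rule: it happens to
# work for 16/64 and 19/95, but fails for most fractions, e.g. 12/24.
from fractions import Fraction

def cancel_diagonally(num, den):
    """Apply the (false!) rule: in (10a+b)/(10b+d), cancel the shared digit b."""
    a, b = divmod(num, 10)   # num = 10*a + b
    c, d = divmod(den, 10)   # den = 10*c + d
    if b == c and d != 0:
        return Fraction(a, d)
    return None

print(cancel_diagonally(16, 64) == Fraction(16, 64))  # True  (lucky!)
print(cancel_diagonally(19, 95) == Fraction(19, 95))  # True  (lucky!)
print(cancel_diagonally(12, 24) == Fraction(12, 24))  # False (12/24 = 1/2, not 1/4)
```

Two successes prove nothing; one failure disproves the rule.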

Or we may remember the rule for divisibility by 3: if the sum of the digits of your number is divisible by three, then so is your number. We check this with 231 (yes), 9444 (yes), and 1717 (no). Now, while the rule is true, this does not constitute a proof, for we haven't checked EVERY number, only three specific numbers. We would have to show that, given an arbitrary number with digits a_n...a_3a_2a_1a_0, then if a_0 + a_1 + ... + a_n is divisible by 3, so is a_n...a_3a_2a_1a_0. This leads us to:

(II). Correct Proof: Brute Force:

Often the way the theorem is stated, it tries to guide you as to what to do. For instance, in the theorem we are trying to prove on divisibility by three (see above) it involves looking at the sum of the digits of our number. So, in a brute force proof, we ask ourselves:  how can we get the sum of the digits, given the number a_n...a_3a_2a_1a_0?

For example, 314 would be
a_2a_1a_0, with a_2 = 3, a_1 = 1, a_0 = 4, and sum of digits = (3+1+4)

Well, we might try looking at other ways of writing our number. Often there are different forms that are equivalent, but bring out different properties. For digits, we recall this comes from powers of 10: our number 314 can be written as
314 = 3*100 + 1*10 + 4*1

So, notice what happens if we subtract from 314 the sum of its digits:
314 - (3+1+4)   =   3*100 + 1*10 + 4*1 - (3+1+4)        
314 - (3+1+4)   =   (3*100 - 3) + (1*10 - 1) + (4*1 - 4)
314 - (3+1+4)   =   (3)*99 + (1)*9 + (4)*0                     

Ah. Notice that the right hand side is clearly divisible by 3, as each term is multiplied by 0, 9, or 99. Now bring the digit sum (3+1+4) over to the other side: 314 equals a multiple of three plus (3+1+4). So if (3+1+4) is divisible by three, then 314 is too; and if (3+1+4) is not divisible by three, then neither is 314.

Now, we've done this proof in the special case when our number is 314. There is nothing wrong with first proving something for a specific case, number, or function, as long as we then generalize. We see that the exact same proof would carry through if instead we considered the number:
 a_n...a_3a_2a_1a_0  =  a_n*10^n + ... + a_1*10^1 + a_0*10^0
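The heart of the general proof is that a number minus its digit sum is always a multiple of 9 (each power of 10 contributes a 10^k - 1 = 99...9 term), so a number and its digit sum leave the same remainder on division by 3. A short Python loop can sanity-check this identity (the names below are my own; and note the irony: the loop is itself only a Proof By Example, not a proof):

```python
# Sanity check of the identity behind the brute-force proof:
# n minus its digit sum is a multiple of 9, hence of 3, so n and
# digit_sum(n) are divisible by 3 together.
def digit_sum(n):
    return sum(int(d) for d in str(n))

for n in range(10000):
    assert (n - digit_sum(n)) % 9 == 0
    assert (n % 3 == 0) == (digit_sum(n) % 3 == 0)
print("identity checked for n < 10000")
```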



(III). Correct Proof: Proof By Contradiction

This is one of my favorite ways of proving statements. Sometimes, instead of trying to directly show that something is true, it is easier to assume it fails, and go for a contradiction. Let us look at an example, and as I am too lazy to set up a pretty TeX document and just want to type directly, we will define:
Integral{ f(x), [a, b] } to be the integral of f(x)  from a to b.

Theorem: Let f(x) be a continuous function on the real line. If the integral of f(x) vanishes for EVERY interval [a,b] with a<b, then f(x) is identically zero.

If we try to prove this directly we might run into some trouble, for we are given information on f(x) over intervals, but must prove something over a point. What if, perhaps, we try to prove by contradiction? We assume for the sake of argument that the result is false: all the hypotheses hold, but there is a counterexample, say f, that is NOT zero everywhere. Now we have something to work with, and we try to show that if such a function existed, then it is not possible to satisfy all of our hypotheses. This contradiction means that our initial assumption that there was a counterexample IS FALSE, and thus the theorem does hold.

Let us try this in this case. So we have a continuous function which integrates to zero over any interval, but is not identically zero. So there is some point, say x0, where the function is not zero. Without loss of generality, let us assume our function is positive at x0, i.e. f(x0) > 0 (a similar proof works for f(x0) < 0).

Well, let us glean all the information we can out of our hypotheses on f. We assume f is continuous. So, if we choose e > 0, then we know there is a d > 0 such that, if |x-x0| < d, we have |f(x)-f(x0)| < e.

But we get freedom in choosing e! We know that our f must integrate to zero over any interval, so we have Integral{ f(x), [x0-d, x0+d] } = 0. But f(x0) is positive! If e is sufficiently small, by continuity f(x) will be positive around x0. For example: taking e = f(x0)/2, we get there is a d > 0 such that f(x) > f(x0)/2 > 0 whenever |x-x0| < d.

Now we can get a contradiction. As f(x) > f(x0)/2 on this interval, standard results of calculus give us:
0 = Integral{f(x), [x0-d,x0+d] } >= Integral{f(x0)/2, [x0-d,x0+d] } = f(x0)*d,

where the first equality follows from our assumption that f integrates to zero over any interval. But f(x0)*d > 0, and we've reached a contradiction! Basically the above is just a rigorous way of saying that if a continuous function is positive at some point, it is positive in a neighborhood of the point and thus cannot integrate to zero there.
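The idea can be illustrated numerically (this is an illustration, not the proof: the choice of function, interval, and the midpoint Riemann sum below are all my own). Take a continuous function that is positive at x0 = 0 and estimate its integral over a small interval around 0: the result comfortably exceeds f(x0)/2 times the half-width d, so it cannot be zero.

```python
# Numerical illustration: a continuous function positive at x0 has a
# positive integral over a small interval around x0, estimated with a
# simple midpoint Riemann sum.
def riemann(f, a, b, steps=10000):
    h = (b - a) / steps
    return sum(f(a + (i + 0.5) * h) for i in range(steps)) * h

f = lambda x: x * x + 1      # continuous, with f(0) = 1 > 0
d = 0.1                      # a small half-width around x0 = 0
approx = riemann(f, -d, d)   # exact value is 0.2006..., clearly nonzero
print(approx > f(0) / 2 * d)  # True: integral exceeds f(x0)/2 * d > 0
```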



(IV). Correct Proof: Proof By Induction

Proofs by induction are nice, and good to use when you are trying to show something is true for all integers. These are often mechanical proofs, with well defined guidelines, so you know what to do.

To prove something by induction, there are two things you must do:
First: show it is true for n = 0 (or n = 1 if you just want to prove it from 1 on). This is called the BASIS STEP of the proof. Second: show that IF the result is true for n, THEN it is true for n+1. This is called the INDUCTIVE STEP. If you can do this, you are done!

Why? Well, setting n=0 yields it true for n=1. Then setting n=1 yields it is true for n=2, and so on and so on. Alternatively, we may explain this as follows. Let P(n) mean that the result is true for n. We have P(0) and P(n) implies P(n+1). Thus taking n=0 gives us two statements: P(0) and P(0) true implies P(1) is true. By the laws of logic, we can now conclude P(1) is true. Now we take n=1 and we have two statements: P(1) and P(1) true implies P(2) is true. Thus P(2) is true, and so on.

Let's do an example now:

Theorem: 1 + 2 + ... + n = n*(n+1)/2.
Proof By Induction:
Basis Step: When n = 1, we get 1 = 1*2/2 = 1
Inductive Step: Assume true for some n, say n = k.

Now we must show it is true for n = k+1. So, let us examine 1 + 2 + ... + k + (k+1).
By induction, we know what the sum of the first k terms is, so we can write:
1 + ... + (k+1) = k*(k+1)/2  +  (k+1) 
                 =  (k+1) * { k/2   +  1}
                 =  (k+1) * { (k+2) / 2 }
		 =  (k+1)(k+1 + 1)/2  
But this is what we were trying to show! We've now completed the proof!
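As a sanity check (and only that: the induction proof covers ALL n, while a loop can only check finitely many), here is a quick Python test of the closed form:

```python
# Check 1 + 2 + ... + n = n*(n+1)/2 for many values of n. This is not a
# proof; the induction argument above is what covers every n.
for n in range(1, 1000):
    assert sum(range(1, n + 1)) == n * (n + 1) // 2
print("formula checked for n < 1000")
```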
 Note: there is an equivalent form of Induction that is often easier to use: Instead of assuming that it is true just for n = k and then trying to show that it must be true for n = k+1, one assumes it is true for ALL integers j ≤ k, and then show that it holds for k+1. We leave it to the interested reader to prove that these are equivalent when using Induction on integers.



(V). Correct Proof: Divide and Conquer

The more assumptions and hypotheses we have on objects, the more (detailed) theorems and results we should know about them. Often it helps in proving theorems to break the proof up into several cases, covering all possibilities. We call this method Divide and Conquer. It is ESSENTIAL in using this method that you cover all cases: that you make sure you consider all possibilities. For example, you might do: Case 1: the function is continuous. Now you have all the theorems about continuity at your disposal. And then Case 2: the function is not continuous. So now you have a special point where the function is discontinuous, and theorems and results about such points. The advantage is that, before, you could not use either set of results. The disadvantage is that it is now two proofs that you must give, but you get more to work with.

Theorem: |f(x) + g(x)| <= |f(x)| + |g(x)| for f, g real valued functions.
Case 1: f(x), g(x) >= 0.
Then |f(x) + g(x)| = f(x) + g(x) = |f(x)| + |g(x)|

Case 2: f(x) >= 0, g(x) < 0
We want to somehow get f(x) + g(x). We can add them together, and get f(x) + g(x) < 0 + f(x) = |f(x)|, but when we take absolute values of both sides, the inequality could change. So, a standard trick is to break this case into subcases:

          Subcase A: 0 <= f(x) + g(x)
          Then as g(x) < 0, f(x) + g(x) < f(x)
	  So 0 <= |f(x) + g(x)| < f(x) <= |f(x)| + |g(x)|

	  Subcase B: f(x) + g(x) < 0
	  Then 0 < -(f(x) + g(x)) <= -g(x)  as  f(x) >= 0
          So 0 < |f(x) + g(x)| <= |g(x)| <= |f(x)| + |g(x)|

Case 3: f(x) < 0, g(x) >= 0
This is proved identically to Case 2: just switch the roles of f and g.

Case 4: f(x) < 0, g(x) < 0
This is proved almost identically to Case 1: as f(x) + g(x) < 0, we have |f(x) + g(x)| = -(f(x) + g(x)) = (-f(x)) + (-g(x)) = |f(x)| + |g(x)|.
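The four cases (and the subcases of Case 2) can be exercised numerically; the sample values below are my own choice and include both signs and zero, so every case above gets hit at least once. Again, this checks the theorem but does not prove it:

```python
# Exercise every sign combination of the triangle inequality
# |a + b| <= |a| + |b|, where a plays the role of f(x) and b of g(x).
test_values = [-3.5, -1.0, 0.0, 0.5, 2.0]
for a in test_values:
    for b in test_values:
        assert abs(a + b) <= abs(a) + abs(b)
print("triangle inequality holds for all sampled cases")
```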




(VI). Correct Proof: Proof By Counterexample

This is FUN!! And often easy! Say you are trying to prove that something IS NOT TRUE! For example, say you are trying to show that not every continuous function is differentiable everywhere. Well, it doesn't matter what hat you pull it out of, if you can give me (or anyone) a continuous function that is not differentiable at a point, then YOU ARE DONE! (the interested reader should look at the function |x| at x=0 now).

Notice the difference between this and the incorrect "Proof By Example". Just because something holds for the values you check doesn't mean it holds everywhere; but if it fails somewhere, then it certainly doesn't hold everywhere. See also the arguments under Proof By Example above.

As a specific example, consider f(x) = x^2 and the claim that f(x) = 4 for all real x. Well, f(2) = 4, and f(-2) = 4, so x^2 = 4 for all x. Wrong! Incorrect Proof By Example. But f(1) = 1, so x^2 does not equal 4 for all x.

 

(VII). Test your proving abilities

Consider the polynomial f(n) = n^2 + n + 41. Note f(0) = 41, f(1) = 43, f(2) = 47, f(3) = 53, and so on. Interestingly, these are prime numbers! Is f(n) prime for EVERY non-negative integer n?
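Before deciding, you might check a few values by machine (the trial-division `is_prime` helper is my own, for illustration). Keep the moral of section (I) in mind: a run of successes is not a proof.

```python
# Check f(n) = n^2 + n + 41 for small n: every value below turns out
# prime, but that alone proves nothing (Proof By Example is no proof!).
def is_prime(m):
    if m < 2:
        return False
    return all(m % k != 0 for k in range(2, int(m ** 0.5) + 1))

print(all(is_prime(n * n + n + 41) for n in range(10)))  # True -- but is it ALWAYS prime?
```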

Let f(n) = Sum_{k = 0 to n} k^2. Prove or disprove: f(n) is a quadratic polynomial, i.e. f(n) = a*n^2 + b*n + c for some a, b and c.

 

