(III). Correct Proof: Proof By Contradiction
This is one of my favorite ways of proving statements. Sometimes, instead of trying
to directly show that something is true, it is easier to assume it fails, and go
for a contradiction. Let us look at an example, and as I am too lazy to set up a
pretty tex document and just want to type directly, we will define:
Integral{ f(x), [a, b] } to be the integral of f(x) from a to b.
Theorem: Let f(x) be a continuous function on the Real Line. If the integral of f(x)
vanishes for EVERY interval [a,b] with a<b then f(x) is identically zero.
If we try to prove this directly we might run into some trouble, for we are given
information about f(x) over intervals, but must prove something at each individual
point. What
if, perhaps, we try to prove by contradiction? We assume for the sake of
argument that the result is false: all the hypotheses hold, but there is a
counterexample, say f, that is NOT zero everywhere. Now we have something to work
with, and we try to show that if such a function existed, then it is not possible
to satisfy all of our hypotheses. This contradiction means that our initial
assumption that there was a counterexample IS FALSE, and thus the theorem does hold.
Let us try this in this case. So we have a continuous function which integrates to
zero over any interval, but is not identically zero. So there is some point, say
x0, where the function is not zero. Without loss of generality, let us assume
f(x0) > 0 (a similar proof works for f(x0) < 0).
Well, let us glean all the information we can out of our hypotheses on f. We assume
f is continuous. So, if we choose any e > 0 (e playing the role of epsilon), then we
know there is a d > 0 (d playing the role of delta) such that if |x - x0| < d, we
have |f(x) - f(x0)| < e.
But we get freedom in choosing epsilon! We know that our f must integrate to zero
over any interval, so we have Integral{ f(x), [x0-d, x0+d] } = 0. But f(x0) is
positive! If epsilon is sufficiently small, by continuity f(x) will be positive
around x0. For example, taking e < f(x0)/2, we get a d > 0 such that
f(x) > f(x0)/2 > 0 whenever |x - x0| < d.
Now we can get a contradiction. As f(x) > f(x0)/2 on this interval, standard results
of calculus (if one function is at least as large as another on an interval, so is
its integral) give us:
0 = Integral{ f(x), [x0-d, x0+d] } >= Integral{ f(x0)/2, [x0-d, x0+d] } = f(x0)*d,
where the first equality follows from our assumption that f integrates to zero
over any interval.
But f(x0)*d > 0, and we've reached a contradiction!
Basically the above is just a rigorous way of saying that if a continuous
function is positive at some point, it is positive in a neighborhood of the
point and thus cannot integrate to zero there.
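If you like to see this numerically, here is a tiny Python sketch of that last
observation. It is an illustration, not part of the proof, and the particular
function, the point x0 and the half-width d below are just choices made for the demo:

# A numerical illustration (not a proof): a continuous function that is positive
# at a point has a positive integral over a small enough interval around that
# point, so it cannot integrate to zero over EVERY interval.

def f(x):
    return x * x - 1.0          # continuous, and f(2) = 3 > 0

x0, fx0 = 2.0, 3.0              # our chosen point and the value f(x0)
d = 0.1                         # small enough that f(x) > fx0/2 on [x0-d, x0+d]

def integral(g, a, b, n=10000):
    """Midpoint Riemann-sum approximation of Integral{ g(x), [a, b] }."""
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

# The proof's lower bound is Integral{ f(x), [x0-d, x0+d] } >= f(x0)*d > 0.
print(integral(f, x0 - d, x0 + d), ">=", fx0 * d)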
(IV). Correct Proof: Proof By Induction
Proofs by induction are nice, and good to use when you are trying to show something
is true for all non-negative integers (or for all integers from some starting point
on). These are often mechanical proofs, with well-defined guidelines, so you know
what to do.
To prove something by induction, there are two things you must do:
First: show it is true for n = 0 (or n = 1 if you just want to prove it from 1 on).
This is called the BASIS STEP of the proof. Second: assume the result is true for n,
and show that it must then be true for n+1. This is called the INDUCTIVE STEP. If
you can do both of these, you are done!
Why? Well, applying the inductive step with n=0 yields it is true for n=1. Then
applying it with n=1 yields it is true for n=2, and so on and so on. Alternatively,
we may explain this as follows. Let
P(n) mean that the result is true for n. We have P(0) and P(n) implies P(n+1).
Thus taking n=0 gives us two statements: P(0) and P(0) true implies P(1) is
true. By the laws of logic, we can now conclude P(1) is true. Now we take n=1
and we have two statements: P(1) and P(1) true implies P(2) is true. Thus P(2)
is true, and so on.
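If a picture in code helps, here is a toy Python sketch of that "domino"
bookkeeping. It only illustrates how the basis step plus the inductive step reach
every n; the cutoff of 100 and the names are arbitrary choices for the demo:

# Modeling the chain P(0), P(0) => P(1), P(1) => P(2), ... (an illustration,
# not a proof). 'established' records which P(n) we have reached so far.

established = {0}                  # basis step: P(0) holds

def inductive_step(n):
    """Stand-in for the argument 'if P(n) holds, then P(n+1) holds'."""
    if n in established:
        established.add(n + 1)

for n in range(100):               # apply the step over and over
    inductive_step(n)

print(99 in established)           # True: P(99) is reached starting from P(0)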
Let's do an example now:
Theorem: 1 + 2 + ... + n = n*(n+1)/2.
Proof By Induction:
Basis Step: When n = 1, both sides equal 1, since 1*(1+1)/2 = 1.
Inductive Step: Assume true for some n, say n = k.
Now we must show it is true for n = k+1.
So, let us examine 1 + 2 + ... + k + (k+1).
By the inductive assumption, we know what the sum of the first k terms is, so we can write:
1 + ... + (k+1) = k*(k+1)/2 + (k+1)
= (k+1) * { k/2 + 1}
= (k+1) * { (k+2) / 2 }
= (k+1)(k+1 + 1)/2
But this is what we were trying to show! We've now completed the proof!
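If you would like a quick numerical sanity check of the formula (remember, checking
values is NOT a proof; see the remarks on proof by example), a few lines of Python
will compare both sides for many n. The cutoff 1000 is an arbitrary choice:

# Compare 1 + 2 + ... + n with n*(n+1)/2 for many n (a sanity check, not a proof).

for n in range(1, 1001):
    assert sum(range(1, n + 1)) == n * (n + 1) // 2, f"fails at n = {n}"
print("checked n = 1 through 1000")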
Note: there is an equivalent form of Induction that is often easier to use:
Instead of assuming that it is true just for n = k and then trying to show
that it must be true for n = k+1, one assumes it is true for ALL integers
j ≤ k, and then show that it holds for k+1. We leave it to the interested
reader to prove that these are equivalent when using Induction on integers.
(V). Correct Proof: Divide and Conquer
The more assumptions and hypotheses we have on objects, the more (detailed)
theorems and results we should know about them. Often it helps in proving
theorems to break the proof up into several cases, covering all possibilities.
We call this method Divide and Conquer. It is ESSENTIAL in using this method that
you cover all cases: that you make sure you consider all possibilities. For example,
you might do: Case 1: the function is continuous. Now you have all the theorems
about continuity at your disposal. And then Case 2: the function is not continuous. So
now you have a special point where the function is discontinuous, and theorems and
results about such points. The advantage is that, before splitting into cases, you
could not use either set of results. The disadvantage is that you now have two proofs
to give, but in each one you have more to work with.
Theorem: |f(x) + g(x)| <= |f(x)| + |g(x)| for f, g real-valued functions.
Case 1: f(x), g(x) >= 0.
Then |f(x) + g(x)| = f(x) + g(x) = |f(x)| + |g(x)|
Case 2: f(x) >= 0, g(x) < 0
We want to somehow bound f(x) + g(x). Since g(x) < 0, adding f(x) to both sides
gives f(x) + g(x) < 0 + f(x) = |f(x)|, but when we take absolute
values of both sides, the inequality could change. So, a standard trick is to break
this case into subcases:
Subcase A: 0 <= f(x) + g(x)
Then as g(x) < 0, f(x) + g(x) < f(x)
So 0 <= |f(x) + g(x)| < f(x) <= |f(x)| + |g(x)|
Subcase B: f(x) + g(x) < 0
Then 0 < -(f(x) + g(x)) <= -g(x), as f(x) >= 0.
Since f(x) + g(x) < 0 here, |f(x) + g(x)| = -(f(x) + g(x)), so
0 < |f(x) + g(x)| <= |g(x)| <= |f(x)| + |g(x)|
Case 3: f(x) < 0, g(x) >= 0
This is proved just as in Case 2, with the roles of f and g switched.
Case 4: f(x) < 0, g(x) < 0
This is proved almost identically to Case 1, as now
|f(x) + g(x)| = -(f(x) + g(x)) = -f(x) - g(x) = |f(x)| + |g(x)|.
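If you care to spot-check the theorem numerically (again, checking is not proving),
here is a rough Python sketch that samples random pairs of values, records which of
the four sign cases each pair lands in, and verifies the bound. The sampling range,
the count and the seed are arbitrary choices:

# Spot-check |a + b| <= |a| + |b| across the four sign cases (not a proof).

import random

random.seed(0)
seen = {}                                   # how many samples fell in each case
for _ in range(10000):
    a = random.uniform(-10.0, 10.0)         # plays the role of f(x)
    b = random.uniform(-10.0, 10.0)         # plays the role of g(x)
    case = (a >= 0, b >= 0)                 # Cases 1 through 4 from the text
    seen[case] = seen.get(case, 0) + 1
    assert abs(a + b) <= abs(a) + abs(b)
print(seen)                                 # all four cases show up, no violations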
(VI). Correct Proof: Proof By Counterexample
This is FUN!! And often easy! Say you are trying to prove that something IS NOT
TRUE! For example, say you are trying to show that not every continuous function is
differentiable everywhere. Well, it doesn't matter what hat you pull it out of, if
you can give me (or anyone) a continuous function that is not differentiable at a
point, then YOU ARE DONE! (the interested reader should look at the function |x| at
x=0 now).
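If you want a numerical hint of why |x| fails at x = 0 (a hint, not a proof), look
at the one-sided difference quotients in the short sketch below; the particular step
sizes are arbitrary:

# One-sided difference quotients of f(x) = |x| at x = 0 (an illustration only):
# from the right they tend to +1, from the left to -1, so no single slope works.

for h in (0.1, 0.01, 0.001, 0.0001):
    right = (abs(0 + h) - abs(0)) / h        # approaches +1
    left = (abs(0 - h) - abs(0)) / (-h)      # approaches -1
    print(h, right, left)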
Notice the difference between this and the Incorrect 'Let's Prove by Example'. Just
because something holds for the particular values you check doesn't mean it holds
everywhere; but if it fails somewhere, then it certainly doesn't hold everywhere.
See also the arguments for Proofs by Example.
As a specific example, consider f(x) = x^2 and the claim that f(x) = 4 for all real
x. Well, f(2) = 4, and f(-2) = 4, so x^2 = 4 for all x. Wrong! Incorrect Proof by
Example. But f(1) = 1, so x^2 does not equal 4 for all x.
(VII). Test your proving abilities
Consider the polynomial f(n) = n^2 + n + 41. Note f(0) = 41, f(1) = 43, f(2) = 47,
f(3) = 53 and so on. Interestingly, these are all prime numbers! Is f(n) prime for
every non-negative integer n?
Let f(n) = Sum_{k = 0 to n} k^2. Prove f(n) is a quadratic polynomial, i.e. that
f(n) = a*n^2 + b*n + c for some a, b and c.
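If you want to experiment before committing to answers (and experimenting is not
proving!), here is a rough Python helper for the first problem. The trial-division
primality test and the range of n to try are just illustrative choices:

# A helper for exploring f(n) = n^2 + n + 41 (exploring is not proving!).

def is_prime(m):
    """Trial division; fine for numbers of this size."""
    if m < 2:
        return False
    k = 2
    while k * k <= m:
        if m % k == 0:
            return False
        k += 1
    return True

def f(n):
    return n * n + n + 41

# Try as many values of n as you like before deciding on your answer.
for n in range(10):
    print(n, f(n), is_prime(f(n)))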