General takeaways (all classes)
MATH 317: Additional comments related to material from the class. If anyone wants to convert this to a blog, let me know. These additional remarks are for your enjoyment, and will not be on homeworks or exams. These are just meant to suggest additional topics worth considering, and I am happy to discuss any of these further.
We can use truth tables to convert an IF-THEN statement to an INCLUSIVE OR. Read about Boolean algebras for more on this important topic. You've seen this at various points in your math career; sometimes it's easier to attack the contrapositive rather than the original statement.
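The equivalence can be checked mechanically. Here is a small sketch (my own illustration, not part of the course material) verifying via a truth table that IF p THEN q, (NOT p) OR q, and the contrapositive IF (NOT q) THEN (NOT p) agree on all four truth assignments:

```python
# Truth-table check: p -> q is equivalent to (not p) or q,
# and also to its contrapositive (not q) -> (not p).
from itertools import product

def implies(p, q):
    # Material implication: false only when p is true and q is false.
    return (not p) or q

for p, q in product([True, False], repeat=2):
    assert implies(p, q) == ((not p) or q) == implies(not q, not p)
    print(p, q, implies(p, q))
```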
A major theme of today's lecture was building complicated functions out of simpler ones. In the end, we could get truncations, maxima and minima, and absolute values from IF-THEN statements and other basic operations. Some of these we haven't done yet but will either do on Monday, or I'll just leave to the books.
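To illustrate the theme, here is a sketch using some standard identities (the particular combinations below are my own illustration, not necessarily the exact constructions from lecture): absolute value comes from max, max comes from addition and absolute value, min comes from max, and truncation comes from min and max.

```python
# Building complicated functions out of simpler pieces.

def my_abs(x):
    return max(x, -x)                     # |x| from max

def my_max(x, y):
    return (x + y + my_abs(x - y)) / 2    # max from +, -, and |.|

def my_min(x, y):
    return -my_max(-x, -y)                # min from max

def truncate(x, lo, hi):
    return my_min(my_max(x, lo), hi)      # clamp x to the interval [lo, hi]

print(my_abs(-3), my_max(2, 5), my_min(2, 5), truncate(7, 0, 4))
```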
In calculus the absolute value function wreaks havoc, as it is not differentiable. In linear programming its effect is far less severe, as we can essentially linearize it (at the cost of introducing binary indicator variables). This can lead to conversations on how to measure errors. The Method of Least Squares is one of my favorites in statistics (click here for the Wikipedia page, and click here for my notes). The Method of Least Squares is a great way to find best-fit parameters. Given a hypothetical relationship y = a x + b, we observe values of y for different choices of x, say (x1, y1), (x2, y2), (x3, y3) and so on. We then need a way to quantify the error. It's natural to look at the observed value of y minus the predicted value of y; thus it is natural that the error should be Sum_{i=1 to n} h(yi - (a xi + b)) for some function h. What is a good choice? We could try h(u) = u, but this leads to sums of signed errors (positive and negative), and thus many errors that are large in magnitude could cancel out. The next choice is h(u) = |u|; while this is a good choice, it is not analytically tractable, as the absolute value function is not differentiable. We thus use h(u) = u^2; though this assigns more weight to large errors, it does lead to a differentiable function, and thus the techniques of calculus are applicable. We end up with a very nice closed-form expression for the best-fit values of the parameters.
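For the linear case that closed-form expression is short enough to code up directly. The sketch below (my own illustration) comes from setting the partial derivatives of Sum_{i=1 to n} (yi - (a xi + b))^2 with respect to a and b equal to zero and solving the resulting two linear equations:

```python
# Closed-form least-squares fit for y = a x + b.
# a = (n*Sxy - Sx*Sy) / (n*Sxx - Sx^2),  b = (Sy - a*Sx) / n,
# where Sx, Sy, Sxx, Sxy are the usual sums over the data.

def least_squares(xs, ys):
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Data generated exactly on the line y = 2x + 1, so the fit recovers it.
print(least_squares([0, 1, 2, 3], [1, 3, 5, 7]))  # (2.0, 1.0)
```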
It is possible to get so caught up in reductions and compactifications that the resulting equation hides all meaning. A terrific example is the great physicist Richard Feynman's reduction of all of physics to one equation, U = 0, where U represents the unworldliness of the universe. I'm mentioning this here because what he does involves measuring errors by squaring, the topic discussed a moment ago. For each physics equation, take the square of the left-hand side minus the right-hand side, then sum everything and call that U. One term is, say, (F - ma)^2, and since each summand is nonnegative, the only way U = 0 is if each summand is zero, and thus each physics equation must hold. Suffice it to say, reducing all of physics to this one equation does not make it easier to solve physics problems / understand physics (though, of course, sometimes good notation does assist us in looking at things the right way).
Video online here: http://youtu.be/TPDlEMpXBjg
(Repeating from last Wednesday's additional comment, as it's been awhile): The Simplex Method allows us to solve the standard linear programming problem. There were a lot of clever ideas in the proof. The first was that we used Phase II to prove Phase I and then used Phase I to prove Phase II; this seems illegal, as Phase II requires Phase I, but fortunately it isn't: the auxiliary problem solved in Phase I comes with an obvious feasible starting point, so there is no infinite regress. The idea is that if we can find a solution to a related problem, we can pass from that to a solution to the problem we care about. This is somewhat similar to the auxiliary lines that appear in geometry proofs; the difficulty is figuring out where to draw them. We needed to pass from our original problem to a related one.
We talked about tic-tac-toe today as a counting problem: how many `distinct' games are there? We consider two games that differ only by a rotation or reflection of the board to be the same game; see http://www.btinternet.com/~se16/hgb/tictactoe.htm for a nice analysis, or see the image here for optimal strategy.
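Before identifying symmetries, one can sanity-check the raw count by brute force. The sketch below (my own illustration, not from the linked page) enumerates every play sequence, stopping a game as soon as someone completes a line or the board fills up; the standard figure for this raw count, before any rotations or reflections are identified, is 255,168.

```python
# Brute-force count of tic-tac-toe games.  A game ends the moment one
# player completes a line or the board is full; move order matters, and
# rotations/reflections are NOT identified here.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def count_games(board, player):
    if winner(board) or all(board):
        return 1                            # game over: count this sequence
    total = 0
    for i in range(9):
        if not board[i]:
            board[i] = player
            total += count_games(board, 'O' if player == 'X' else 'X')
            board[i] = None                 # undo the move (backtrack)
    return total

print(count_games([None] * 9, 'X'))  # 255168
```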
Probably the most famous movie occurrence of tic-tac-toe is in WarGames; the clip is here (the entire movie is online here; start around 1:44:17). This was a classic movie from my childhood.