Math 398: Spring 2014: Independent Study in Operations Research  / Linear Programming

Videos of lectures:

Click here for videos from the version taught in 2012 at Mount Holyoke College (videos are below the new lectures)


The purpose of the paragraphs below is to explain the rationale for the course and its requirements. While the goal is to have this course cross-listed across departments, owing to the multidisciplinary nature of the subject, it will be housed in Mathematics. I have chosen to make it a pre-core class because there is a curricular need for classes at this level, and the material can be covered in great depth without assuming more advanced courses. This gives a wider audience of students access to the material, and can motivate them to explore these other subjects in greater detail. Many of the most important algorithms in computer science and mathematics are elementary to state and analyze (though discovering them was quite difficult). For example, Strassen's algorithm revolutionized computations with matrices, yet a professor who felt so inclined could include it in a standard linear algebra class. Similarly, the proof of the simplex method mostly requires simple results from linear algebra and only elementary convergence facts from real analysis; rather than requiring real analysis as a prerequisite and limiting the reach of the course, we have decided to open the course to all and quickly teach the needed limiting results.

Units:

  1. Ancient / Classical algorithms: These are warm-ups meant to show students how to do familiar computations faster (i.e., things they’ve seen many times). Examples include the following.

    1. Babylonian multiplication: base 60 is a pain; instead of learning the 60×60 table for xy, the Babylonians noted xy = ((x+y)² − x² − y²) / 2, reducing the problem to knowing squares, subtraction, and division by 2. This leads to the concept of a look-up table and interpolation, and is a great opening to talk about classical tables (for special functions) and how modern appliances (such as cell phones) work with limited processing power.
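The reduction to a table of squares can be sketched in a few lines of Python; this is my own illustrative sketch (the function and table names are invented for the example, not part of the course materials):

```python
# A look-up table of squares plays the role of the Babylonians' tablets:
# a product is recovered from three look-ups, subtraction, and a halving.
SQUARES = {n: n * n for n in range(200)}  # precomputed table of squares

def babylonian_multiply(x, y):
    """Compute x*y via the identity xy = ((x+y)^2 - x^2 - y^2) / 2,
    using only the squares table, subtraction, and division by 2."""
    return (SQUARES[x + y] - SQUARES[x] - SQUARES[y]) // 2

assert babylonian_multiply(37, 59) == 37 * 59
```

The same pattern (precompute a table, then answer queries with cheap arithmetic) underlies the classical special-function tables mentioned above.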

    2. Horner’s algorithm: fast polynomial evaluation by cleverly grouping terms. We will discuss why such efficiencies are needed. One project will be for students to write code to iterate polynomials and create fractal sets, with and without Horner’s algorithm, and to observe the sizable decrease in run-time (or at least I hope they will see this – I saw it in the 90s when I wrote such code!).
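As a quick sketch of the grouping (my own minimal example, not code from the course):

```python
def horner(coeffs, x):
    """Evaluate a_n x^n + ... + a_1 x + a_0, with coeffs = [a_n, ..., a_0].
    The nested grouping
        (...((a_n x + a_{n-1}) x + a_{n-2}) x + ...) + a_0
    uses only n multiplications, versus roughly n^2 for the naive method."""
    result = 0
    for a in coeffs:
        result = result * x + a
    return result

# 2x^3 - 6x^2 + 2x - 1 evaluated at x = 3
assert horner([2, -6, 2, -1], 3) == 5
```

In the fractal project, this evaluation sits inside the innermost loop over pixels, which is why the constant-factor savings becomes visible in the run-time.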

    3. Fast exponentiation: the method of repeated squaring to evaluate powers quickly. This will lead into discussions of the advantages of different bases, applications in cryptography, and the increased storage required by some implementations (and ways to minimize it). Students will write code to perform these computations.
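A minimal sketch of repeated squaring (my own illustration; the optional modulus hints at the cryptographic application, where one computes powers modulo a large number):

```python
def fast_pow(base, exp, mod=None):
    """Compute base**exp with O(log exp) multiplications by reading exp
    in binary: square at every step, multiply in only where a bit is 1."""
    result = 1
    while exp > 0:
        if exp & 1:                # current binary digit of exp is 1
            result = result * base
            if mod is not None:
                result %= mod
        base = base * base         # square for the next binary digit
        if mod is not None:
            base %= mod
        exp >>= 1
    return result

assert fast_pow(3, 13) == 3 ** 13
```

Reducing modulo mod at each step keeps the intermediate numbers small, one of the storage issues alluded to above.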

    4. Strassen’s algorithm: We will discuss how to obtain an improvement in the exponent of the run-time and what this means; this fits nicely with the Horner’s algorithm discussion. We will discuss real-world implications of being able to do matrix multiplication faster, why things are easier to program when the matrix size is a power of 2, and what that means in determining how data is collected. If time permits we will discuss algorithms involving sparse matrices.
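The recursion is short enough to sketch in full. This is a bare-bones version of my own (assuming, as noted above, that the matrix size is a power of 2, and using plain lists rather than any matrix library); seven recursive products instead of eight is what drops the exponent from 3 to log₂7 ≈ 2.81:

```python
def add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def sub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def strassen(A, B):
    """Multiply n x n matrices, n a power of 2, via Strassen's recursion."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    # Split each matrix into four h x h blocks.
    A11 = [row[:h] for row in A[:h]]; A12 = [row[h:] for row in A[:h]]
    A21 = [row[:h] for row in A[h:]]; A22 = [row[h:] for row in A[h:]]
    B11 = [row[:h] for row in B[:h]]; B12 = [row[h:] for row in B[:h]]
    B21 = [row[:h] for row in B[h:]]; B22 = [row[h:] for row in B[h:]]
    # Strassen's seven products (naively there would be eight).
    M1 = strassen(add(A11, A22), add(B11, B22))
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, sub(B12, B22))
    M4 = strassen(A22, sub(B21, B11))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(sub(A21, A11), add(B11, B12))
    M7 = strassen(sub(A12, A22), add(B21, B22))
    # Reassemble the four blocks of the product.
    C11 = add(sub(add(M1, M4), M5), M7)
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(sub(add(M1, M3), M2), M6)
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bottom = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bottom
```

The extra additions cost only O(n²) per level, which is why trading one multiplication for many additions still wins asymptotically.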


  2. Efficiency Revisited: These are some important problems with which students may not have had as much experience. The point is to look at a variety of problems where the standard definitions or brute force are too slow.

    1. Linear Programming: The highlight of the course is a detailed analysis of linear programming. We will discuss the history of the subject, from original papers that claimed it was of theoretical interest but of no practical value (since there would never be a way to do so many computations) to its recent successes and challenges. Students will be required to write or use code to solve a linear programming problem. Notes of mine: http://web.williams.edu/Mathematics/sjmiller/public_html/416/currentnotes/AdvLinProgBook.pdf
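As a warm-up to the simplex method, one can see in two variables why only the vertices of the feasible region matter. The sketch below is my own toy example (the LP and all names are invented for illustration; it is a brute-force vertex enumeration, not the simplex method the course develops): maximize 3x + 2y subject to x + y ≤ 4, x + 3y ≤ 6, x ≥ 0, y ≥ 0.

```python
from itertools import combinations

# Each constraint a*x + b*y <= c, the sign constraints included.
constraints = [(1, 1, 4), (1, 3, 6), (-1, 0, 0), (0, -1, 0)]

def intersect(c1, c2):
    """Point where both constraints hold with equality (Cramer's rule)."""
    a1, b1, r1 = c1
    a2, b2, r2 = c2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None                       # parallel boundary lines
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

def feasible(p):
    return all(a * p[0] + b * p[1] <= c + 1e-9 for a, b, c in constraints)

# The candidate optima are the feasible intersections of constraint pairs.
vertices = [p for c1, c2 in combinations(constraints, 2)
            if (p := intersect(c1, c2)) is not None and feasible(p)]
best = max(vertices, key=lambda p: 3 * p[0] + 2 * p[1])
```

Checking all pairs is hopeless in high dimensions (the number of vertices explodes combinatorially), which is exactly the motivation for the simplex method's guided walk from vertex to vertex.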