The textbook does cover MIP, but there are many free (and non-free)
resources. To do the assignment, reading the slides is probably
enough, but the following links are helpful.
Again, the book does not cover stochastic local search. The slides
should contain enough material to complete the assignment, but you
can look at the Coursera course Solving Algorithms for Discrete
Optimisation. The book Stochastic Local Search is a good general
reference.
Amortised analysis can be quite difficult to understand the first
time around. The slides contain a lot of material and examples, but
you are also encouraged to read Chapter 16 of the textbook. There are
a few takeaways from this lecture:
Amortised analysis bounds the time needed (in the computational
complexity $O()$ sense) to do any sequence of $n$ operations, and
hence gives you the average time complexity of a single operation.
You can brute force the mathematics and just compute the complexity,
or you can use the accounting method or the potential method.
Both the accounting method and the potential method give you
mathematical tricks to estimate the total computational complexity for
a sequence of $n$ operations.
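As a concrete illustration (a sketch in Python, not part of the assigned material), the aggregate argument for the multipop stack below can be checked empirically: each element is pushed once and popped at most once, so any sequence of $n$ operations takes at most $2n$ elementary steps, i.e. $O(1)$ amortised per operation.

```python
# Sketch: a stack supporting multipop(k), with a counter for elementary
# steps. Each element is pushed once and popped at most once, so n
# operations cost at most 2n steps in total.
import random

def run_sequence(n_ops):
    stack, steps = [], 0
    for _ in range(n_ops):
        if stack and random.random() < 0.5:
            k = random.randint(1, len(stack))
            for _ in range(k):        # multipop: k elementary pops
                stack.pop()
                steps += 1
        else:
            stack.append(0)           # push: one elementary step
            steps += 1
    return steps

random.seed(0)
total = run_sequence(1000)
# At least one step per operation, at most 2n steps in total.
assert 1000 <= total <= 2 * 1000
```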
In the lecture we will look at:
A rather artificial example of a stack where you can remove $k$
items in one go.
A binary counter.
A more useful dynamic table that can resize itself as it gets full.
The binary counter and the stack are useful for understanding how the
accounting and potential methods work. The dynamic table is the
foundation of many modern data structures.
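The dynamic table argument can also be seen in a short sketch (assumed here to double its capacity on overflow, as in the textbook): $n$ appends trigger copies costing $1 + 2 + 4 + \dots < 2n$ in total, so an append is $O(1)$ amortised even though a single append can cost $O(n)$.

```python
# Minimal sketch of a dynamic table that doubles its capacity when full,
# counting the element moves caused by resizing.
class DynamicTable:
    def __init__(self):
        self.capacity = 1
        self.data = [None] * self.capacity
        self.size = 0
        self.copies = 0              # element moves caused by resizing

    def append(self, x):
        if self.size == self.capacity:
            self.capacity *= 2       # double on overflow
            new = [None] * self.capacity
            for i in range(self.size):
                new[i] = self.data[i]
                self.copies += 1
            self.data = new
        self.data[self.size] = x
        self.size += 1

t = DynamicTable()
n = 1000
for i in range(n):
    t.append(i)
assert t.copies < 2 * n   # total copying work is linear in n
```

For $n = 1000$ the resizes copy $1 + 2 + \dots + 512 = 1023$ elements in total, comfortably below $2n$.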
Today we are going to look at probabilistic analysis of
algorithms and some techniques for designing algorithms that use
randomisation. Randomised algorithms are a big subject with many
useful and efficient algorithms; unfortunately, we will only scratch
the surface.
We will cover:
The hiring problem as a prototypical probabilistic problem.
Indicator variables.
Linearity of expectation. Given two random variables $X$ and $Y$,
the expectation satisfies $E(X + Y) = E(X) + E(Y)$. The theorem is
true for any distribution of $X$ and $Y$, even when they are
dependent. This is perhaps one of the most useful theorems in the
probabilistic analysis of algorithms.
Some examples of using indicator variables, including the hat-check
problem and the birthday paradox.
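The hat-check problem shows indicator variables and linearity of expectation working together: with $X_i = 1$ if person $i$ gets their own hat back, $E[X] = \sum_i E[X_i] = n \cdot \frac{1}{n} = 1$ for any $n$. A small simulation (a sketch, modelling the random hat assignment as a uniform permutation) agrees:

```python
# Indicator-variable sketch for the hat-check problem: X counts the
# people who get their own hat back under a uniformly random assignment.
import random

def matches(n):
    perm = list(range(n))
    random.shuffle(perm)                         # uniform random assignment
    return sum(perm[i] == i for i in range(n))   # X = X_0 + ... + X_{n-1}

random.seed(1)
trials = 20000
avg = sum(matches(10) for _ in range(trials)) / trials
assert abs(avg - 1.0) < 0.1     # empirical mean close to E[X] = 1
```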
Some randomised strategies for the hiring problem, that is, using
randomness to avoid worst-case run times.
The algorithm for generating random permutations uniformly. We will
not cover the proof of its correctness.
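For reference, the algorithm itself is short (a sketch of RANDOMIZE-IN-PLACE, also known as the Fisher-Yates shuffle): swap position $i$ with a uniformly chosen position in $[i, n-1]$, which produces each of the $n!$ permutations with probability exactly $1/n!$.

```python
# Sketch of RANDOMIZE-IN-PLACE (Fisher-Yates): generates a uniformly
# random permutation of the input list in place.
import random
from collections import Counter

def randomize_in_place(a):
    n = len(a)
    for i in range(n):
        j = random.randint(i, n - 1)   # uniform over positions i..n-1
        a[i], a[j] = a[j], a[i]
    return a

# Empirical check of uniformity on n = 3: each of the 3! = 6
# permutations should appear with frequency close to 1/6.
random.seed(3)
counts = Counter(tuple(randomize_in_place([0, 1, 2])) for _ in range(60000))
assert len(counts) == 6
assert all(abs(c / 60000 - 1 / 6) < 0.02 for c in counts.values())
```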
Analysis of hash tables, including expected load factors and chain
lengths.
Universal hashing, a randomised technique for building efficient hash
functions.
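The classic universal family from the textbook is $h_{a,b}(k) = ((ak + b) \bmod p) \bmod m$, with $p$ a prime larger than any key, $a \in \{1, \dots, p-1\}$ and $b \in \{0, \dots, p-1\}$ chosen at random; for any fixed keys $k \neq l$, the collision probability over the choice of $(a, b)$ is at most $1/m$. A sketch (with $p$ and $m$ picked arbitrarily for illustration):

```python
# Sketch of the universal hash family h_{a,b}(k) = ((a*k + b) % p) % m.
import random

p = 10007          # a prime larger than every key we hash
m = 50             # number of buckets

def make_hash():
    a = random.randint(1, p - 1)
    b = random.randint(0, p - 1)
    return lambda k: ((a * k + b) % p) % m

# Estimate the collision probability for two fixed distinct keys over
# many random choices of (a, b); it should be at most about 1/m = 0.02.
random.seed(2)
k, l = 123, 456
collisions = sum(1 for _ in range(20000)
                 if (h := make_hash())(k) == h(l))
assert collisions / 20000 < 0.04
```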
Slides
The information in the slides is spread out all over the place, and I
plan to use the whiteboard. The slides contain a lot of material, but
you are better off looking at the textbook. You are strongly advised
to read:
Chapter 5 contains the relevant background on probabilistic
analysis and randomised algorithms
Chapter 11 up to and including 11.3.4 has the material on hash
tables. Some of this material should be revision.
Appendix C contains some background on probability that you might
need to refer to while reading the other material.
Lecture 6
Today we are going to look at polynomial time reductions between
problems. We will use some of the slides from
here. Exactly
what we cover depends on how much time we have and on the interests
of the class. You are advised to read all of the slides and the
corresponding material in the book. Chapter 34 is essential reading
for both today’s lecture and Lecture 9.
There are two motivations for studying polynomial time reductions:
If I have an algorithm for one problem, then I can use it to solve
other problems that on the surface seem different.
To study the computational complexity of problems, in particular the
classes P and NP. If I know that my problem is NP-hard, then
I am very unlikely to find a polynomial time algorithm.
Today we will be thinking more about the first case, and in Lecture 9
we will look more at the landscape of computational complexity.
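A small example of the first motivation (a sketch; the graph representation is my own choice for illustration): $S$ is an independent set of $G = (V, E)$ if and only if $V \setminus S$ is a vertex cover, so an algorithm for either problem immediately solves the other.

```python
# Sketch of the Independent Set <-> Vertex Cover reduction.
# V is a set of vertices; E is a set of frozenset edges.

def is_vertex_cover(E, C):
    return all(e & C for e in E)          # every edge touches C

def is_independent_set(E, S):
    return all(not (e <= S) for e in E)   # no edge lies inside S

V = {1, 2, 3, 4}
E = {frozenset({1, 2}), frozenset({2, 3}), frozenset({3, 4})}
S = {1, 3}                                # an independent set of this path

assert is_independent_set(E, S)
assert is_vertex_cover(E, V - S)          # the reduction at work
```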
Lecture 7
Slides and Topics
Today is a flipped lecture on SAT. This lecture will
prepare you for problem 3 of assignment 2.
Watch the following videos. They cover slides 1-29 of
SAT-intro
and slides 1-4 of
SAT-CDCL.
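As a warm-up before the videos, here is a sketch of a brute-force satisfiability check on CNF formulas in DIMACS-style clause lists (a positive integer is a variable, a negative one its negation). The CDCL solvers covered in the videos solve the same problem far more cleverly than this exponential enumeration.

```python
# Brute-force SAT sketch: try all 2^n assignments of n variables and
# check whether every clause has at least one satisfied literal.
from itertools import product

def satisfiable(clauses, n_vars):
    for bits in product([False, True], repeat=n_vars):
        assign = {i + 1: bits[i] for i in range(n_vars)}
        if all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses):
            return True
    return False

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
assert satisfiable([[1, -2], [2, 3], [-1, -3]], 3)
# x1 and not x1 is unsatisfiable
assert not satisfiable([[1], [-1]], 1)
```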