Competitive Programming in Haskell: two more DP challenges

Continuing the series on dynamic programming, I just have a couple challenge problems for you today. I have indeed solved both of these problems in Haskell, but I don’t yet know how to write elegant solutions! There is a reason that the techniques covered in my previous posts aren’t quite good enough.

Feel free to discuss in the comments! I’m hoping that I can learn some new approaches from some of my readers. I will probably post some hints in the comments towards the right recurrences, so don’t look at the comments if you don’t want any spoilers.


7 Responses to Competitive Programming in Haskell: two more DP challenges

  1. Joeri says:

    Ooh, nice. I’ve only looked at Honi thus far, and I must admit to not being too familiar with the libraries that are available in kattis. (I also don’t know how to do code formatting on this blog… are my content blockers preventing something from loading that adds style buttons, or do I really only get a text field?). Anyway, an attempt:

    For difficulty `i`, we have `M_i = X_i + Y_i + Z_i` possible tasks: `X_i` tasks that can only have difficulty `i`, `Y_i` tasks that can have difficulty `i` or `i+1`, and `Z_i` tasks that can have difficulty `i-1` or `i`. The first insight is that if a task has been chosen for all difficulties less than `i`, then when picking a task for difficulty `i` any unused tasks of the `Z_i` variety can only still be used on `i`, so we may consider them now to be tasks of the `X_i` variety. Also the tasks of `Z_i` are exactly the tasks of `Y_(i-1)`.

    Naive mode, taking two lists, one of size N (tasks of the `X_i` variety), one of size N-1 (tasks of the `Y_i` variety):
    ```
    f :: [Integer] -> [Integer] -> Integer
    -- Any patterns not matched would violate the fact that input lists are size N and N-1
    f [x] [] = x
    f (x1:x2:xs) (y1:ys) = x1 * f (x2 + y1 : xs) ys
                         + y1 * f (x2 + y1 - 1 : xs) ys
    ```
    The calculations can be modulo’d afterwards, or on the spot with a wrapper around `Integer` that does all the required modulo stuff. Whatever. I don’t care about that for now, as there are ways of dealing with that, and it would clutter up the code while I’m just trying to get it conceptually right.

    The first term of the sum picks a task from `X_i`, and since no tasks of `Y_i` were used, moves those to `X_(i+1)`. The second term picks a task from `Y_i`, which leaves `Y_i - 1` tasks to move to `X_(i+1)`.

    The tricky part is that `X_i` is no longer just the initial list of `N` integers indexed at `i`; there’s stuff added to it as well. This brings us to the second insight: `X_i` can only ever take one of two values, `N_i + Y_(i-1)` or `N_i + Y_(i-1) - 1`, where `N_i` is the initial list of `N` integers indexed at `i`. I consider `Y_0 = 0`, and I’ll start my indexing at `i = 1`.

    This leads to a new recurrence, this time actually using the index instead of the input values. That means things can be floated up one level:
    ```
    -- the value n is the N from the problem statement
    -- ns is the input list of N integers
    -- ys is the input list of N-1 integers
    f :: Bool -> Int -> Int
    f _ i | i > n = 1
    f pickedYLast i = x * f False (i+1) + y * f True (i+1)
      where
        x = ns ! i + ys ! (i-1) - if pickedYLast then 1 else 0
        y = ys ! i
    ```

    Call it with `f False 1` to get the show on the road.

    The `Bool` parameter is `True` when, for difficulty `i-1`, we picked a task of the `Y_(i-1)` variety, and `False` when we picked a task of the `X_(i-1)` variety instead. This previous choice influences the number of tasks of the `X_i` variety, as described earlier.

    This results in a function that is nice to use with the usual dynamic programming tricks, needing a table (or other storage structure) of 2*N entries.
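
    For concreteness, the tabulation might look something like the sketch below (my assumptions: `ns` and `ys` have been read into 1-indexed arrays, with `ys` padded with a 0 at index 0 for `Y_0` and a 0 at index `n` since the last difficulty has no vague tasks; modular arithmetic omitted as before):
    ```
    import Data.Array

    -- A lazily built table over (difficulty, picked-a-Y-task-last-time);
    -- each of its 2*(N+1) entries is computed at most once.
    solve :: Int -> Array Int Integer -> Array Int Integer -> Integer
    solve n ns ys = table ! (1, False)
      where
        table :: Array (Int, Bool) Integer
        table = array ((1, False), (n + 1, True))
          [ ((i, b), go i b) | i <- [1 .. n + 1], b <- [False, True] ]
        go i _ | i > n = 1
        go i pickedYLast = x * table ! (i + 1, False) + y * table ! (i + 1, True)
          where
            x = ns ! i + ys ! (i - 1) - if pickedYLast then 1 else 0
            y = ys ! i
    ```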

    Will look at Assassins later.

  2. Joeri says:

    Hrm. That didn’t render the way I wanted it to. Oh, well. I managed to put it into kattis (making use of your modulo and memo snippets from earlier posts), and it runs in 0.88s. I managed to rephrase the problem as a fold instead of a dynamic programming problem, and that got it down to 0.58s. I see a Haskell solution in the 0.10-0.15s range in the stats, so I’m wondering how that was accomplished. Was that you?

    As for Assassins, best I can think of is tabulating all possible assassin-configurations across all attempts, for a state space of 1000*2^15 ~ 33 million entries, with a single entry being 15 probabilities of being alive. Not completely unreasonable, and with some tight data packing that should fit into the 1024 MB of memory we get, but it feels like there must be some sneaky shortcuts I’m missing…

    • Brent says:

      Yes, my solution to Honi runs in 0.11s. I think I did it similarly to you, so without seeing your code I’m not sure what the difference is. I used Int (not Integer), a few strictness annotations, and stored the input values in UArrays. Beyond that I don’t know of anything that would make a big difference in terms of speed.
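
      (To illustrate the last point, here is roughly what I mean, as a sketch only; the helper name and parsing details here are made up for illustration:)
      ```
      import qualified Data.ByteString.Char8 as C
      import Data.Array.Unboxed (UArray, listArray)
      import Data.Maybe (mapMaybe)

      -- Read a line of n space-separated integers into an unboxed array,
      -- giving O(1) indexing instead of repeated list traversals.
      toUArray :: Int -> C.ByteString -> UArray Int Int
      toUArray n = listArray (1, n) . map fst . mapMaybe C.readInt . C.words
      ```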

  3. Joeri says:

    Sorry for the comment spam, I’m just having so much fun with these. Just solved Assassins as well, in a nice 0.35s. It didn’t feel like a DP problem either, but they both follow the same pattern. Maybe I should check up on my understanding of what makes a DP problem! The pattern is a route through a(n implicit) state machine, where we are interested in some property of the final state after a series of events. Pretty much `solve = extract . foldl' step initialState $ events`.

    For Honi, states are pairs of counts: combinations where the most recently picked task was/wasn’t of vague difficulty (spanning the last difficulty and the upcoming one), and adding a new difficulty induces a state change. The property we’re interested in is the projection onto “combinations where the most recently picked task wasn’t vague”, as there are no tasks with vague difficulty `N,N+1`.
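
    Roughly, the fold step I have in mind for Honi is sketched below (modular arithmetic omitted again; the state is the pair of counts described above):
    ```
    import Data.List (foldl')

    -- State: (ways where the last pick was not vague, ways where it was).
    -- For difficulty i we need N_i, Y_(i-1), and Y_i, so pad the Y list
    -- with a 0 at both ends.
    honi :: [Integer] -> [Integer] -> Integer
    honi ns ys = fst (foldl' step (1, 0) (zip3 ns (0 : ys) (ys ++ [0])))
      where
        step (a, b) (x, yPrev, yCur) =
          ( a * (x + yPrev) + b * (x + yPrev - 1)
          , (a + b) * yCur )
    ```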

    For Assassins, a state is a probability distribution over all possible assassin-configurations (who is alive, who isn’t), and each assassination attempt induces a state change. The property we’re interested in at the end is, per assassin, the sum of the probabilities of the configurations in which that assassin is still alive.
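
    The extraction at the end is then just a sum over bitmasks; assuming the distribution is kept as a `UArray Int Double` indexed by the alive-set bitmask, it is something like:
    ```
    import Data.Array.Unboxed (UArray, bounds, range, (!))
    import Data.Bits (testBit)

    -- Probability that assassin j is alive: total probability of all
    -- configurations (bitmasks) in which bit j is set.
    aliveProb :: UArray Int Double -> Int -> Double
    aliveProb dist j = sum [ dist ! m | m <- range (bounds dist), testBit m j ]
    ```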

    Both problems can even be worked into “state = some vector; event = state transition = square matrix multiplication”. Extracting the final answer is also just another matrix multiplication. However, I still find it hard to interpret matrix code and to keep straight what rows, columns, and multiplication mean in the actual problem domain, rather than just as matrices. For Honi it’s not too bad, as the transition matrices are 2×2, but Assassins would work with potentially 2¹⁵×2¹⁵ matrices that are really messy, and ugh.

    • Brent says:

      Nice work! If you thought of them first in terms of a state transition function, then you did well. If you think of them first in terms of a recurrence with two parameters (e.g. the time step and the set of live assassins), then you could try making a 2D table and filling it in; the problem is that it is too large to fit in the allowed memory all at once. You have to realize that each row of the table (or column, depending on which way you have it) depends only on the previous one, so there’s no need to store the entire table: you can get away with computing just one row at a time.

    • Brent says:

      A 2^15 x 2^15 matrix would be much too large. But there are some nice problems where thinking in terms of a transition matrix is fruitful, since you can compute the result of n transition steps in only O(log n) matrix multiplications, using exponentiation by repeated squaring. Maybe I should post some of those next. =)
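
      For reference, the repeated-squaring idea itself is tiny. A toy sketch with matrices as lists of rows (not a representation you’d want for speed) looks like this:
      ```
      import Data.List (transpose)

      type Matrix a = [[a]]

      mmul :: Num a => Matrix a -> Matrix a -> Matrix a
      mmul a b = [ [ sum (zipWith (*) row col) | col <- transpose b ] | row <- a ]

      identity :: Num a => Int -> Matrix a
      identity n = [ [ if i == j then 1 else 0 | j <- [1 .. n] ] | i <- [1 .. n] ]

      -- m^k in O(log k) matrix multiplications, via repeated squaring.
      mpow :: Num a => Matrix a -> Int -> Matrix a
      mpow m 0 = identity (length m)
      mpow m k
        | even k    = let h = mpow m (k `div` 2) in mmul h h
        | otherwise = mmul m (mpow m (k - 1))
      ```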

  4. Soumik Sarkar says:

    For Honi I used the state (index :: Int, shared task consumed by index + 1 :: Bool). I did not have trouble using an Array (Int, Bool) Int for this, which ran in 0.14s. This problem can also be solved in O(1) space, since you only need the dp values of the previous index.

    Assassins was more interesting because the state is (assassination index :: Int, set of alive assassins :: Int (bitmask)), and the formulaic lazy Array (Int, Int) Double of size m*2^n is just too inefficient. Since there are no dependencies between values with the same assassination index, an Array Int (UArray Int Double) can be used, and I found that it fits in 0.85s. Like Honi, values at one assassination index depend only on the previous one, so an improvement on this is to not store everything but to generate each UArray from the previous one. And finally, if we want to be the fastest possible, a single mutable array is all we need: every assassination changes half the elements. This runs very fast, in 0.07s.

    Which implementations are more elegant? I’ll let you decide :)
    I’m also curious about your solution, since your runtime is not close to any of mine. Looking forward to seeing it.
