
Dynamic Programming to the Rescue

The basic principle behind dynamic programming is that if your solution has a recursive structure, then you can:

  1. Break the problem down into smaller, repeating sub-problems.
  2. Store the solutions to those sub-problems and reuse them later when needed.

For example, in the plain recursion, LIS(arr, 0) and LIS(arr, 1) are required, and therefore recalculated, again and again. Instead, once computed, we can store each result in an array and reuse it whenever it is needed.

This means we now need an additional array. Let's call it LISArr; it stores the value of LIS(arr, n) at index n.

  1. Initialize each element of LISArr to zero.
  2. Whenever LIS(arr, n) is required, first check the value of LISArr[n].
    1. If LISArr[n] is zero, compute LIS(arr, n) and store the result in LISArr[n].
    2. If LISArr[n] is greater than zero, simply reuse its value from the array.

Now it is time to change the prior pseudo-code to incorporate dynamic programming. Only the LIS(arr, n) routine needs to change, gaining one additional parameter for the array LISArr.
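Since the prior pseudo-code is not reproduced here, the sketch below is one possible Ruby version of the memoized routine; the names lis, lis_dp, and lis_arr stand in for the LIS, LISDP, and LISArr identifiers used in the text, and the recurrence (longest increasing subsequence ending at index n) is assumed from context.

```ruby
# Length of the longest increasing subsequence ending at index n,
# memoized in lis_arr (step 2 of the checklist above).
def lis(arr, n, lis_arr)
  return lis_arr[n] if lis_arr[n] > 0  # already computed: reuse it

  result = 1  # the element at n alone is a subsequence of length 1
  (0...n).each do |i|
    if arr[i] < arr[n]
      candidate = lis(arr, i, lis_arr) + 1
      result = candidate if candidate > result
    end
  end
  lis_arr[n] = result  # store the answer for later reuse
  result
end

# Overall LIS length: the best LIS ending at any index.
def lis_dp(arr)
  lis_arr = Array.new(arr.length, 0)  # step 1: initialize to zero
  (0...arr.length).map { |n| lis(arr, n, lis_arr) }.max || 0
end

puts lis_dp([10, 9, 2, 5, 3, 7, 101, 18])  # => 4, e.g. [2, 3, 7, 18]
```

With the memo array in place, each lis(arr, n) is computed only once; every later request for the same index is answered by the stored value.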

We can see that the time complexity of this pseudo-code is still quadratic, i.e., O(n^2). The LISDP routine calls LIS once for each item of the array, and each of those calls does at most n units of work, which makes the overall complexity O(n^2). The difference is that memoization removes the repeated recomputation, so each sub-problem is solved only once.
