Dynamic Programming to the Rescue
The basic principle behind dynamic programming
is that if your solution has a recursive structure, then you can:
- Break the problem down into smaller, repeating sub-problems.
- Store the solutions of the sub-problems and reuse them later when needed.
For example, `LIS(arr, 0)` and `LIS(arr, 1)` are recalculated again and again. Instead, once computed, we can store their results in an array and reuse them whenever they are needed. Hence, we now have to account for an additional array. Let's call this array `LISArr`; it stores the value of `LIS(arr, n)` at index `n`.
- Initialize each element of `LISArr` to zero.
- Whenever `LIS(arr, n)` is required, check the value of `LISArr[n]` (see the sketch after this list):
  - If `LISArr[n]` is zero, compute `LIS(arr, n)` and store the result in `LISArr[n]`.
  - If `LISArr[n]` is greater than zero, simply reuse the stored value.
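A minimal sketch of that check-compute-store step, written here in Python; the helper name `memo_get` and the passed-in `compute` callback are illustrative and not part of the pseudo-code below:

```python
def memo_get(n, lis_arr, compute):
    """Return lis_arr[n] if it is already filled in; otherwise compute it once and store it."""
    if lis_arr[n] == 0:          # zero marks "not computed yet"
        lis_arr[n] = compute(n)  # compute LIS(arr, n) exactly once
    return lis_arr[n]            # every later call reuses the stored value
```

Because the length of an increasing subsequence ending at any index is at least 1, a stored zero can never be mistaken for a real answer.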
Now it is time to change the prior pseudo-code to incorporate dynamic programming. Only the `LIS(arr, n)` routine needs to change, with one additional parameter for the array `LISArr`.
We can see that the time complexity of this pseudo-code is still quadratic, i.e., `O(n^2)`. The `LISDP` function runs for each item of the array and, thanks to memoization, the recursive case for each index executes at most once, with its loop running at most `n` times; every repeated call simply returns the stored value. This makes the overall complexity `O(n^2)`.
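One way to tally the work, assuming memoization guarantees that the recursive case for index `i` runs at most once and loops over at most `i` earlier items:

$$\sum_{i=1}^{n-1} i = \frac{n(n-1)}{2} = O(n^2)$$

All other calls return a stored value in constant time.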
Routine: LISDP(arr, n, LISArr)
Input: an array arr of size S, an index n, and an array LISArr of the same size as arr, initialized to zero
Output: Length of the longest increasing subsequence that has arr[n] as its last item

LISDP(arr, n, LISArr)

Base case:
1. if (n == 0) return 1
2. if (LISArr[n] != 0) return LISArr[n]              // already computed; do not go into the recursive case

Recursive case:
1. max = 1
2. Loop from j = 0..(n-1)
   a. if (arr[j] < arr[n])
      i.  LISArr[j] = LISDP(arr, j, LISArr)          // compute and store LIS(arr, j)
      ii. if (LISArr[j] + 1 > max) then max = LISArr[j] + 1   // extend the subsequence ending at j with arr[n]
3. LISArr[n] = max                                   // store the result for later lookups
4. return max
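A runnable Python translation of this routine, together with a small driver that takes the best answer over every possible ending index. The driver function `longest_increasing_subsequence` and the sample array are my own additions for illustration, not part of the course material:

```python
def lis_dp(arr, n, lis_arr):
    """Length of the longest increasing subsequence of arr that ends at index n."""
    if n == 0:
        return 1
    if lis_arr[n] != 0:                            # memoized: skip the recursive case
        return lis_arr[n]

    best = 1                                       # arr[n] alone is a subsequence of length 1
    for j in range(n):
        if arr[j] < arr[n]:
            lis_arr[j] = lis_dp(arr, j, lis_arr)   # compute and store LIS(arr, j)
            best = max(best, lis_arr[j] + 1)       # extend that subsequence with arr[n]

    lis_arr[n] = best                              # store the result before returning
    return best


def longest_increasing_subsequence(arr):
    """Overall LIS length: the best result over every possible ending index."""
    if not arr:
        return 0
    lis_arr = [0] * len(arr)                       # zero means "not computed yet"
    return max(lis_dp(arr, n, lis_arr) for n in range(len(arr)))


if __name__ == "__main__":
    print(longest_increasing_subsequence([3, 10, 2, 1, 20]))  # 3, e.g. [3, 10, 20]
```

The same recurrence can also be filled in bottom-up with a simple loop over the indices, which avoids deep recursion on long inputs.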