*This lesson was originally featured at "Solving the Target Sum problem with dynamic programming and more" and has been republished here with permission from Fabian Terh.*

Previously, I wrote about solving the 0-1 Knapsack Problem using dynamic programming. Today, I want to discuss a similar problem: the Target Sum problem on LeetCode (read it before continuing with the rest of this article!).

The Target Sum problem is similar to the 0-1 Knapsack Problem, but with one key difference: instead of deciding whether to include an item or not, we now must decide whether to add or subtract an item. This doesn't seem like a significant difference, but it actually makes building our dynamic programming table a lot trickier.

I'll first briefly cover a couple of less efficient recursive solutions, before turning to the dynamic programming solutions in greater detail.

The most straightforward (and least efficient) solution is to explore every possibility using a backtracking algorithm.

```java
public class Solution {
    public int findTargetSumWays(int[] nums, int s) {
        return backtrack(nums, 0, 0, s);
    }

    private int backtrack(int[] nums, int index, int sum, int s) {
        if (index == nums.length) {
            return sum == s ? 1 : 0;
        }
        return backtrack(nums, index + 1, sum + nums[index], s)
             + backtrack(nums, index + 1, sum - nums[index], s);
    }
}
```

377ms. There's definitely room for improvement!
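As a quick sanity check, here's the same backtracking logic run on LeetCode's first example, `nums = [1, 1, 1, 1, 1]` with `S = 3`, which has exactly five valid sign assignments (the demo class and method names here are my own, for a standalone snippet):

```java
public class TargetSumDemo {
    // Same backtracking logic as above, inlined so this snippet runs on its own.
    static int count(int[] nums, int index, int sum, int s) {
        if (index == nums.length) {
            return sum == s ? 1 : 0;
        }
        return count(nums, index + 1, sum + nums[index], s)
             + count(nums, index + 1, sum - nums[index], s);
    }

    public static void main(String[] args) {
        // Five ways to assign signs to [1, 1, 1, 1, 1] so they sum to 3
        System.out.println(count(new int[]{1, 1, 1, 1, 1}, 0, 0, 3)); // prints 5
    }
}
```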

Notice how in the previous solution we end up re-computing the solutions to sub-problems. We could optimize this by caching our solutions using memoization.

Notice how the twist in the problem (forcing us to either add or subtract a number) has already complicated the solution. We are forced to either apply an offset of *+1000* to the array index (the question stipulates that the sum of elements in the array is at most 1000) to account for negative values (since we can't have negative indices in an array), or use a HashMap (Java)/dictionary (Python) instead.

I went with the former, because the overhead of using a HashMap significantly increased the runtime (you can try both and compare!).

```java
import java.util.Arrays;

public class Solution {
    public int findTargetSumWays(int[] nums, int s) {
        // Offset of +1000 maps sums in [-1000, 1000] to indices [0, 2000]
        int[][] memo = new int[nums.length + 1][2001];
        for (int[] row : memo) {
            Arrays.fill(row, Integer.MIN_VALUE);
        }
        return backtrack(memo, nums, 0, 0, s);
    }

    private int backtrack(int[][] memo, int[] nums, int index, int sum, int s) {
        if (index == nums.length) {
            return sum == s ? 1 : 0;
        }
        if (memo[index][sum + 1000] != Integer.MIN_VALUE) {
            return memo[index][sum + 1000];
        }
        int ans = backtrack(memo, nums, index + 1, sum + nums[index], s)
                + backtrack(memo, nums, index + 1, sum - nums[index], s);
        memo[index][sum + 1000] = ans;
        return ans;
    }
}
```

68.82nd percentile. Not great, not terrible either.

Finally, we turn to the dynamic programming solutions.

As with all dynamic programming solutions, we solve for the base case first, and reuse our previous solutions to smaller sub-problems to solve larger ones.

Our base case: when we have no values to add/subtract, we have a sum of 0, and therefore there is only one way to arrive at `S = 0`.

The key idea in this dynamic programming solution is to only branch out from *reachable* target sums.

At the *first iteration* (i.e. the outer for-loop), assume that we are on value *x* in our `nums` array. We know intuitively that there is only one way to reach the target sum of *+x*, and one way to reach *-x*. (Explanation: you start with 0, and you either add *x* or subtract *x*.) Programmatically, we can express this logic by setting `ways to reach x = ways to reach 0` and `ways to reach -x = ways to reach 0`.

In subsequent iterations, we only care about starting from target sums that are already reachable. Assume that the second value is *y*. We know intuitively that after 2 iterations, we can only arrive at 4 possible target sums: `x + y`, `x - y`, `-x + y`, and `-x - y`. Note: these 4 values may not be distinct!

```java
class Solution {
    public int findTargetSumWays(int[] nums, int S) {
        int sum = 0;
        for (int num : nums) sum += num;
        if (S > sum || S < -sum) return 0; // target is out of reach
        // Index i represents the target sum (i - sum), so sums -sum..+sum all fit
        int[] dp = new int[2 * sum + 1];
        dp[sum] = 1; // one way to reach a sum of 0 before using any numbers
        for (int num : nums) {
            int[] next = new int[2 * sum + 1];
            for (int i = 0; i < dp.length; i++) {
                // Only branch out from reachable target sums
                if (dp[i] == 0) continue;
                next[i + num] += dp[i];
                next[i - num] += dp[i];
            }
            dp = next;
        }
        return dp[sum + S];
    }
}
```

This solution uses an array and skips unreachable target sums at each iteration (through the use of the `continue` statement).
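To see the reachability idea concretely, here's a small standalone trace (class and method names are my own) that runs the same DP on `nums = [1, 1]`, where `sum = 2` and index `i` represents the target sum `i - 2`:

```java
import java.util.Arrays;

public class ReachableSumsTrace {
    // Runs the reachable-sums DP, printing the counts array after each number.
    static int[] run(int[] nums, int sum) {
        int[] dp = new int[2 * sum + 1];
        dp[sum] = 1; // one way to reach a sum of 0 before using any numbers
        for (int num : nums) {
            int[] next = new int[2 * sum + 1];
            for (int i = 0; i < dp.length; i++) {
                if (dp[i] == 0) continue; // skip unreachable sums
                next[i + num] += dp[i];
                next[i - num] += dp[i];
            }
            dp = next;
            System.out.println(Arrays.toString(dp));
        }
        return dp;
    }

    public static void main(String[] args) {
        // Prints [0, 1, 0, 1, 0] (sums -1 and +1 reachable),
        // then [1, 0, 2, 0, 1] (one way each to -2 and +2, two ways to 0)
        run(new int[]{1, 1}, 2);
    }
}
```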

You can also use a HashMap and only iterate through the existing keys, which is *theoretically faster but slower in practice*. It is theoretically more efficient because you don't have to iterate through all `2 * sum + 1` indices in the array at each iteration over the `nums` array; however, it seems that the overhead of the HashMap data structure far outweighs the cost of performing that iteration.

83.83rd percentile: we're finally getting somewhere!

We can further optimize our solution in several ways.

Firstly, we can trivially conclude that there are 0 ways to attain a target sum if the target sum exceeds the sum of all the values in the array.

Secondly, we can optimize the space complexity of our algorithm by using only a single array (see my previous post on space-optimizing the dynamic programming solution). This will most likely also improve the runtime, as we don't need to allocate memory for a new array at each iteration.

But more importantly, we can simplify this problem into a 0-1 Knapsack Problem. Credit to yuxiangmusic, who posted this solution in LeetCode's discussion forum (it's absolutely genius!). I'll try to paraphrase the idea here:

The original problem statement is equivalent to: find the number of ways to pick a subset of `nums` to be given positive signs (P), with the rest given negative signs (N), such that their total is equal to `target`.

Therefore...

```
sum(P) - sum(N) = target
sum(P) - sum(N) + sum(P) + sum(N) = target + sum(P) + sum(N)
2 * sum(P) = target + sum(nums)
sum(P) = (target + sum(nums)) / 2
```

... the original problem statement may be converted into the following subset sum problem: find the *number of ways* to choose a subset of `nums` such that its sum is equal to `(target + sum(nums)) / 2`.
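To make the reformulation concrete with LeetCode's example: for `nums = [1, 1, 1, 1, 1]` and `target = 3`, `sum(nums) = 5`, so `sum(P) = (3 + 5) / 2 = 4`. A brute-force check (a throwaway sketch of my own, purely for verification) confirms there are 5 subsets summing to 4, matching the 5 sign assignments:

```java
public class SubsetSumCheck {
    // Brute-force: enumerate subsets of nums by bitmask and count those
    // whose sum equals target. Exponential, so only for tiny inputs.
    static int countSubsets(int[] nums, int target) {
        int count = 0;
        for (int mask = 0; mask < (1 << nums.length); mask++) {
            int sum = 0;
            for (int i = 0; i < nums.length; i++) {
                if ((mask & (1 << i)) != 0) sum += nums[i];
            }
            if (sum == target) count++;
        }
        return count;
    }

    public static void main(String[] args) {
        // sum(P) = (3 + 5) / 2 = 4; there are C(5, 4) = 5 such subsets
        System.out.println(countSubsets(new int[]{1, 1, 1, 1, 1}, 4)); // prints 5
    }
}
```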

We can also add another trivial check to our algorithm: if `target + sum(nums)` is not divisible by 2, we can return 0, because no solution exists (this follows from the last line of the derivation above).

Therefore, the crux of this reformulated problem statement is (in pseudo-code):

```
for each value V in nums:
    for each target sum S:
        number of ways to attain S =
            number of ways to attain S without V
            + number of ways to attain S - V with V
```

`number of ways to attain S without V` is the value in the same column, but 1 row above, in our dynamic programming table.

`number of ways to attain S - V with V` is the value `V` columns to the left, and 1 row above, in our dynamic programming table.

Therefore, in code, our final solution looks like this:

```java
class Solution {
    public int findTargetSumWays(int[] nums, int S) {
        int sum = 0;
        for (int num : nums) sum += num;
        if (S > sum || S < -sum || (S + sum) % 2 == 1) return 0;
        int[] dp = new int[(S + sum) / 2 + 1];
        dp[0] = 1; // one way (the empty subset) to attain a sum of 0
        for (int num : nums) {
            // Iterate right-to-left so each number is used at most once
            for (int i = dp.length - 1; i >= num; i--) {
                dp[i] += dp[i - num]; // Crux
            }
        }
        return dp[dp.length - 1];
    }
}
```

100th percentile. Perfection.

Honestly, this solution took me a while to wrap my head around. The reformulated problem statement so faintly resembles the original one that I had trouble convincing myself of the *correctness of the solution* (i.e. that solving the reformulated problem solves the original one). I found that the best way to truly understand the solution is to step through it line-by-line and really understand what each step does.

This article has been a blast to write, and I think it's amazing how we were able to bring our runtime down from 377ms to 1ms by optimizing our solution. Even simply caching our solutions through memoization reduced our runtime to 8ms, which is an extremely significant improvement.

I hope you had as much fun digesting these solutions as I did writing them!
