
One Pager Cheat Sheet

  • In this tutorial, we explore the differences between dynamic programming and greedy algorithms for finding optimal solutions to problems, noting that greedy approaches can sometimes produce suboptimal solutions.
  • The dynamic programming tutorial provides an example of finding a path through an m x n grid that maximizes the accumulated reward, with O(m * n) time and space complexity.
  • The greedy algorithm picks the locally best step at each cell in the hope of approximating the globally best path; it runs in O(m + n) time with O(1) space, but there is no guarantee that it finds the optimal accumulated reward (see the first sketch after this list).
  • A greedy algorithm yields an optimal solution to the activity selection problem: selecting a maximum-sized set of non-overlapping activities from a set of n activities with given start and finish times (see the second sketch after this list).
  • The greedy algorithm for activity selection has O(n) time complexity with O(1) space complexity when the intervals are already sorted by finish time; otherwise, sorting dominates and it takes O(n log n) time with O(1) space complexity.
  • The goal of the fractional knapsack problem is to fill a knapsack of weight capacity X so that the total value of the items in the sack is maximal.
  • The fractional knapsack problem has an efficient greedy-strategy solution with O(n log n) time complexity and O(n) space complexity (see the third sketch after this list).
  • If the goal is moved to (m-1, 0), the greedy algorithm may fail to find the optimal path, because it relies on a local evaluation function to pick each step rather than considering the full path to the goal.
  • In each iteration, the greedy algorithm for interval scheduling selects only the single compatible interval with the earliest finish time.
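
The first sketch below illustrates the grid-reward comparison summarized above. The grid values and function names are illustrative assumptions, not taken from the tutorial; the point is only that the O(m * n) dynamic programming table is guaranteed optimal, while the O(m + n) greedy walk is not.

```python
def max_reward_dp(grid):
    """O(m * n) time and space: best reward from (0, 0) to (m-1, n-1),
    moving only right or down."""
    m, n = len(grid), len(grid[0])
    dp = [[0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            best_prev = max(dp[i - 1][j] if i > 0 else 0,
                            dp[i][j - 1] if j > 0 else 0)
            dp[i][j] = grid[i][j] + best_prev
    return dp[m - 1][n - 1]


def max_reward_greedy(grid):
    """O(m + n) time, O(1) extra space: always step toward the larger
    neighboring reward; no optimality guarantee."""
    m, n = len(grid), len(grid[0])
    i = j = 0
    total = grid[0][0]
    while i < m - 1 or j < n - 1:
        down = grid[i + 1][j] if i < m - 1 else float("-inf")
        right = grid[i][j + 1] if j < n - 1 else float("-inf")
        if down >= right:
            i += 1
        else:
            j += 1
        total += grid[i][j]
    return total


grid = [[1, 1, 20],
        [5, 1, 1],
        [1, 1, 1]]
print(max_reward_dp(grid))      # 24 (heads right toward the 20)
print(max_reward_greedy(grid))  # 9  (lured down by the 5, misses the 20)
```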
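The second sketch shows the earliest-finish-time greedy for activity selection. The interval values are made up for illustration; the rule matches the bullet above: repeatedly pick the compatible activity that finishes first.

```python
def select_activities(intervals):
    """Return a maximum-size set of non-overlapping (start, finish) intervals.
    O(n log n) with the sort; O(n) if the input is already sorted by finish time."""
    chosen = []
    last_finish = float("-inf")
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_finish:        # compatible with everything chosen so far
            chosen.append((start, finish))
            last_finish = finish
    return chosen


activities = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]
print(select_activities(activities))    # [(1, 4), (5, 7), (8, 11)]
```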
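The third sketch outlines the greedy strategy for the fractional knapsack: sort items by value-to-weight ratio, then take as much of the best-ratio item as still fits. The item values, weights, and capacity here are assumptions for the example.

```python
def fractional_knapsack(items, capacity):
    """items: list of (value, weight) pairs. Returns the maximum total value
    that fits in `capacity` when fractions of items are allowed.
    O(n log n) time for the sort."""
    total_value = 0.0
    remaining = capacity
    for value, weight in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if remaining <= 0:
            break
        take = min(weight, remaining)   # whole item if it fits, else a fraction
        total_value += value * (take / weight)
        remaining -= take
    return total_value


items = [(60, 10), (100, 20), (120, 30)]        # (value, weight)
print(fractional_knapsack(items, capacity=50))  # 240.0
```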