How is dynamic programming different from greedy algorithms?

The key distinction is that greedy algorithms compose a solution "statically," in the sense that each local choice can be finalized without knowing anything about the other local choices that will be made. Dynamic programming algorithms, by contrast, build up solutions to sub-problems and only commit to a solution for the global problem once all the relevant sub-problems have been solved. The Wikipedia page on greedy algorithms puts it well:

The choice made by a greedy algorithm may depend on choices made so far but not on future choices or all the solutions to the subproblem. It iteratively makes one greedy choice after another, reducing each given problem into a smaller one. In other words, a greedy algorithm never reconsiders its choices. This is the main difference from dynamic programming, which is exhaustive and is guaranteed to find the solution. After every stage, dynamic programming makes decisions based on all the decisions made in the previous stage, and may reconsider the previous stage's algorithmic path to solution.


Dynamic programming is applicable to problems exhibiting the properties of:

  • overlapping subproblems, and
  • optimal substructure.

Optimal substructure means that an optimal solution to the problem can be built from optimal solutions to its subproblems. The difference between dynamic programming and greedy algorithms is that with dynamic programming, the subproblems overlap, and those shared subproblems are solved using memoization. "Memoization" is the technique of caching the solution to a subproblem so that it can be reused, rather than recomputed, when other subproblems need it.

This answer has gotten some attention, so I'll give some examples.

Consider the problem "Making change with dollars, nickels, and pennies." This is a greedy problem. It exhibits optimal substructure because you can solve for the number of dollars first, then the number of nickels, then the number of pennies, and combine those solutions efficiently. It does not really exhibit overlapping subproblems, since solving each subproblem doesn't help much with the others (maybe a little bit).
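
Here is a minimal sketch of that greedy strategy in Python (the function name and the coin table are mine, for illustration):

    def make_change(cents):
        # Greedy change-making: each choice is final. Once we commit to a
        # number of dollars, we never revisit it while solving for nickels
        # and pennies.
        counts = {}
        for coin, value in [("dollar", 100), ("nickel", 5), ("penny", 1)]:
            counts[coin], cents = divmod(cents, value)
        return counts

    print(make_change(237))  # {'dollar': 2, 'nickel': 7, 'penny': 2}

Note that this only works because 100/5/1 is a canonical coin system; for arbitrary denominations the greedy choice can fail, and you need dynamic programming.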

Consider the problem "Fibonacci numbers." It exhibits optimal substructure because you can solve F(10) from F(9) and F(8) efficiently (by addition). These subproblems overlap because they both share F(7). If you memoize the result of F(7) while solving F(8), you can then solve F(9) more quickly.
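
A minimal memoized version in Python, using functools.lru_cache as the memo table (a plain dictionary would work just as well):

    from functools import lru_cache

    @lru_cache(maxsize=None)  # memoize: each F(n) is computed at most once
    def fib(n):
        if n < 2:
            return n
        return fib(n - 1) + fib(n - 2)

    print(fib(10))  # 55; F(7) is computed once, then reused by F(8) and F(9)

Without the cache, this recursion takes exponential time; with it, each subproblem is solved once, so the whole computation is linear.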

In response to the comment that dynamic programming is about "reconsidering decisions": this is clearly not true of any linear dynamic programming algorithm, such as the maximum subarray problem or the Fibonacci problem above.
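
For instance, Kadane's algorithm for the maximum subarray problem commits to each decision in a single left-to-right pass (a sketch; the variable names are my own):

    def max_subarray(xs):
        # best_here is the answer to the subproblem "largest sum of a
        # subarray ending at the current index". Each step depends only on
        # the previous one; no earlier decision is ever reconsidered.
        best_here = best_overall = xs[0]
        for x in xs[1:]:
            best_here = max(x, best_here + x)
            best_overall = max(best_overall, best_here)
        return best_overall

    print(max_subarray([-2, 1, -3, 4, -1, 2, 1, -5, 4]))  # 6, from [4, -1, 2, 1]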

Essentially, picture a problem having optimal substructure as a directed acyclic graph whose nodes represent subproblems (the whole problem is the node whose indegree is zero) and whose directed edges represent dependencies between subproblems. A greedy problem's graph is a tree: every node except the root has indegree one. A dynamic programming problem's graph has some nodes with indegree greater than one; those nodes are exactly the overlapping subproblems.
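
To make that concrete, here is a small sketch (the graph encoding is my own) that builds the subproblem graph for F(10) and counts indegrees; F(7) has indegree two because both F(8) and F(9) depend on it:

    from collections import Counter

    def fib_edges(n):
        # One edge from each subproblem F(k) to each subproblem it depends on.
        return [(k, k - d) for k in range(2, n + 1) for d in (1, 2)]

    indegree = Counter(dst for _, dst in fib_edges(10))
    print(indegree[10])  # 0 -- the whole problem; nothing depends on it
    print(indegree[7])   # 2 -- shared by F(8) and F(9): an overlapping subproblem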


The difference is that dynamic programming requires you to remember the answers to the smaller states, while a greedy algorithm is local in the sense that all the information needed is contained in the current state. Of course, there is some intersection between the two.