Why merge sort is not dynamic programming
For dynamic programming to be applicable to a problem, two properties must hold:
i. Optimal substructure:
This means that when you break your problem into smaller units, an optimal solution to the whole problem can be built from optimal solutions to those smaller units. For example, in merge sort, an array of numbers gets sorted if we divide it into two subarrays, sort each of them, and combine the results; to sort the two subarrays, we repeat the same process. So an optimal solution (a sorted array) is obtained by finding optimal solutions to the subproblems (sorted subarrays) and combining them. Merge sort fulfils this requirement. The subproblems must also be independent for optimal substructure to hold. Merge sort fulfils this too, since the subproblems' solutions do not affect one another: sorting one half of an array has no effect on the sortedness of the other half.
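The divide, solve, and combine steps described above can be sketched as follows (a minimal Python sketch; the function name and sample input are just for illustration):

```python
def merge_sort(arr):
    # Optimal substructure: a sorted array is built by combining
    # optimal (sorted) solutions to its two halves.
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])    # solve the left subproblem
    right = merge_sort(arr[mid:])   # solve the right subproblem
    # Combine: merge the two independently sorted halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 8, 1, 9, 3]))  # [1, 2, 3, 5, 8, 9]
```

Note that the two recursive calls work on disjoint halves of the array, which is exactly the independence property mentioned above.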
ii. Overlapping subproblems:
This means that while computing the solution, the subproblems you formulate get repeated, and hence need only be solved once. For merge sort, this requirement is met only rarely. An array like 2 1 3 4 9 4 2 1 3 1 9 4 may be a good candidate for overlapping subproblems: the solution to the subproblem sort(2 1 3) can be stored in a table and reused, because it arises twice during the computation. But as you can see, the chance that a random array of numbers contains this kind of repetition is very slim, so applying a dynamic programming technique like memoization to an algorithm like merge sort would only add overhead.
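You can observe this directly by memoizing merge sort (a sketch only, using Python's `functools.lru_cache` as the memo table; tuples are used instead of lists so that subarrays are hashable). On the contrived array above, sort((2, 1, 3)) really is served from the cache the second time, but for typical inputs the cache is almost pure overhead:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def merge_sort(arr):
    # arr is a tuple so it can serve as a memo-table key.
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    # Merge the two sorted tuples.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:]); merged.extend(right[j:])
    return tuple(merged)

data = (2, 1, 3, 4, 9, 4, 2, 1, 3, 1, 9, 4)
print(merge_sort(data))
# Both halves begin with 2 1 3, so sort((2, 1, 3)) is a cache hit
# the second time it is needed.
print(merge_sort.cache_info().hits)
```

For a random array, almost every subarray is distinct, so the cache stores many entries and serves almost no hits.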
Yes. Dijkstra's algorithm uses dynamic programming, as @Alan mentioned in the comments.
Yes. If I may quote Wikipedia here, "Dynamic programming is widely used in bioinformatics for tasks such as sequence alignment, protein folding, RNA structure prediction and protein-DNA binding." 1
1 https://en.wikipedia.org/wiki/Dynamic_programming
The key words here are "overlapping subproblems" and "optimal substructure". When you execute quicksort or mergesort, you are recursively breaking down your array into smaller pieces that do not overlap. You never operate over the same elements of the original array twice during any given level of the recursion. This means there is no opportunity to re-use previous calculations. On the other hand, many problems DO involve performing the same calculations over overlapping subsets, and have the useful characteristic that an optimal solution to a subproblem can be re-used when computing the optimal solution to a larger problem.
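The contrast is easiest to see with a problem that does have overlapping subproblems, such as the Fibonacci numbers: a naive recursion recomputes the same values exponentially often, while memoization solves each subproblem exactly once (a minimal sketch; the call counter is just for illustration):

```python
from functools import lru_cache

calls = 0

@lru_cache(maxsize=None)
def fib(n):
    # The counter only increments on a real computation;
    # repeated subproblems are answered from the cache.
    global calls
    calls += 1
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30))   # 832040
print(calls)     # 31 -- each of fib(0)..fib(30) computed once
```

Without the cache, fib(30) would make over a million recursive calls; with it, the overlapping subproblems collapse to 31. Merge sort's subproblems never collapse this way because they do not overlap.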
Dijkstra's algorithm is a classic example of dynamic programming, as it re-uses prior computations to discover the shortest path between two nodes A and Z. Say that A's immediate neighbors are B and C. We can find the shortest path from A to Z by summing the distance between A and B with our computed shortest path from B to Z; and do similarly for finding the shortest path from C to Z. Then the shortest path from A to Z will be the shorter of these two paths. The key insight here is that we can re-use the shortest path computations for paths of length 2 when computing the shortest paths of length 3, and so on. Doing so results in a much more efficient algorithm.
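To make the re-use concrete, here is a minimal Dijkstra sketch in Python (the graph, the node names A/B/C/Z, and the edge weights are made up for illustration). Each settled distance is a subproblem solution that is re-used when relaxing the outgoing edges of that node:

```python
import heapq

def dijkstra(graph, source):
    # graph: dict mapping node -> list of (neighbor, weight) pairs.
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue  # stale heap entry; a shorter path was found already
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float('inf')):
                # Re-use the computed shortest path to u
                # to improve the path to v.
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

graph = {
    'A': [('B', 1), ('C', 4)],
    'B': [('C', 2), ('Z', 6)],
    'C': [('Z', 3)],
    'Z': [],
}
print(dijkstra(graph, 'A'))  # {'A': 0, 'B': 1, 'C': 3, 'Z': 6}
```

Here the shortest path A to Z (cost 6) is found by extending the already-computed shortest path A to C (cost 3, via B) rather than recomputing it, which is exactly the re-use described above.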
Dynamic programming can be used to solve many types of problems -- see http://en.wikipedia.org/wiki/Dynamic_programming#Examples:_Computer_algorithms for some examples.