If memoization is top-down depth-first, and DP is bottom-up breadth-first, what are the top-down breadth-first / bottom-up depth-first equivalents?
Let's analyse what the edges in the two graphs mean. An edge from subproblem a to subproblem b represents the relation "a solution of b is used in the computation of a, so b must be solved before a". (In the other graph the edges point the other way round.)
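To make this concrete, here is a small sketch (using Fibonacci as an assumed example) that lists the dependency edges of the subproblem graph, with each edge (a, b) read as "a needs b solved first":

```python
# Sketch: the subproblem dependency graph for fib(n).
# An edge (a, b) means "solving fib(a) requires the solution of fib(b)".
def fib_dependencies(n):
    """Return the dependency edges for computing fib(n)."""
    edges = []
    for k in range(n, 1, -1):
        edges.append((k, k - 1))
        edges.append((k, k - 2))
    return edges

print(fib_dependencies(4))
# [(4, 3), (4, 2), (3, 2), (3, 1), (2, 1), (2, 0)]
```

Any valid order for solving the subproblems must respect these edges, which is exactly a topological ordering of the graph.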
Does topological sort come to mind?
One way to do a topological sort is to perform a depth-first search and process each node on the way out of it (post-order). This is essentially what recursive memoization does: you descend depth-first from every subproblem until you encounter one you haven't solved yet (an unvisited node), solve it, and record the answer on the way back up.
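The correspondence can be sketched with memoized Fibonacci (an assumed example; the structure is the same for any memoized recursion):

```python
# Sketch: memoized recursion as post-order DFS over the subproblem graph.
# Each subproblem is "processed" (solved and cached) on the way OUT of
# the recursion, exactly like post-order processing in a DFS-based
# topological sort.
def fib_memo(n, memo=None):
    if memo is None:
        memo = {}
    if n in memo:                  # already-visited node: reuse it
        return memo[n]
    if n < 2:                      # base cases: the leaves of the DFS
        memo[n] = n
        return n
    # descend depth-first into the unsolved subproblems
    result = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    memo[n] = result               # process the node on the way out
    return result
```

The memo table plays the role of the "visited" set, so each node is solved exactly once.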
Dynamic programming, the bottom-up, breadth-first approach, solves the smaller problems first and constructs solutions to larger ones from them. This matches the other way of doing a topological sort (Kahn's algorithm): repeatedly visit a node with in-degree 0, process it, and remove it. In DP, the smallest problems are solved first because they have in-degree 0: nothing they depend on is still unsolved. ("Smallest" is relative to the problem at hand.)
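The Kahn-style order can be sketched with bottom-up Fibonacci (again an assumed example): by the time the loop reaches subproblem k, both of its dependencies have already been removed from the "unsolved" set.

```python
# Sketch: bottom-up DP mirrors Kahn's topological sort -- solve the
# subproblems with no unmet dependencies first, then build larger
# solutions from them.
def fib_bottom_up(n):
    table = [0] * (n + 1)
    if n >= 1:
        table[1] = 1
    # when we reach k, its dependencies k-1 and k-2 are already solved
    for k in range(2, n + 1):
        table[k] = table[k - 1] + table[k - 2]
    return table[n]
```

The table index here plays the role of the topological order, so no explicit in-degree bookkeeping is needed for such a regular graph.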
The problem here is generating a sequence in which the set of subproblems must be solved, and neither top-down breadth-first nor bottom-up depth-first helps with that. Top-down breadth-first would still end up doing something very similar to its depth-first counterpart, even if the process were split across threads, because the dependency order in which the problems must be solved is unchanged. A bottom-up depth-first approach MIGHT be able to partially solve problems along one chain, but the end result would still resemble the breadth-first counterpart: the subproblems get solved in a similar order.
Given that these approaches offer almost no improvement over the established ones, do not translate well into analogies, and are tedious to implement, they have never become well established.
@AndyG's comment is pretty much on the point here. I also like @shebang's answer, but here's one that directly answers these questions in this context, not through reduction to another problem.
It's just not clear what a top-down, breadth-first solution would look like. But even if you somehow paused the computation so as not to do any sub-computations (one could imagine various continuation-based schemes that might enable this), there would be no point in doing so, because the sub-problems are shared anyway.
Likewise, it's unclear that a bottom-up, depth-first solution could solve the problem at all. If you proceed bottom-up but charge all the way up some spine of the computation while the other sub-problems' solutions are not yet lying in wait, you'd be computing garbage.
Therefore, top-down, breadth-first offers no benefit, while bottom-up, depth-first doesn't even offer a solution.
Incidentally, a more up-to-date version of the above blog post is now a section in my text (this is the 2014 edition; expect updates).