Why is Dantzig's solution to the knapsack problem only approximate?
It doesn't work because the choice that looks best locally can rule out combinations that are better overall. Here is a small counterexample.
Say you have $W=10$ and there are three kinds of objects:
- Type $A$ has weight 9 and value 90.
- Type $B$ has weight 2 and value 19.
- Type $C$ has weight 1 and value 1.
If you look at $v/w$, type $A$ wins, so you take one type $A$ object and fill up the remaining space with one type $C$, for total value $91$.
But the optimal solution takes five type $B$ objects for total value $95$.
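The gap is easy to check by running the greedy heuristic against an exact dynamic program. The sketch below (a minimal implementation, assuming unlimited copies of each type, as in the example above) reproduces the $91$ vs. $95$ outcome:

```python
# Greedy value/weight heuristic vs. exact DP for the unbounded knapsack.
# Item types from the example: (weight, value).
items = {"A": (9, 90), "B": (2, 19), "C": (1, 1)}
W = 10

def greedy(items, W):
    """Dantzig's heuristic: take types in decreasing value/weight order."""
    total, remaining = 0, W
    for w, v in sorted(items.values(), key=lambda wv: wv[1] / wv[0], reverse=True):
        count = remaining // w        # take as many copies as still fit
        total += count * v
        remaining -= count * w
    return total

def optimal(items, W):
    """Exact dynamic program: best[c] = max value achievable with capacity c."""
    best = [0] * (W + 1)
    for c in range(1, W + 1):
        for w, v in items.values():
            if w <= c:
                best[c] = max(best[c], best[c - w] + v)
    return best[W]

print(greedy(items, W))   # 91: one A (weight 9), then one C (weight 1)
print(optimal(items, W))  # 95: five copies of B
```

The DP runs in $O(W \cdot n)$ time, which looks polynomial but is only pseudo-polynomial, since $W$ is exponential in the number of bits used to write it down.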
We might characterize the general difficulty as follows: a choice that seemed good early on ran into trouble later, because our early choice of $A$, optimal in the short term, forces us to take the very bad $C$ afterwards. The optimal solution makes many choices that seem suboptimal in the short run, but which work well together.
This is quite typical of NP-complete problems. They have many parts that interact in complicated ways, so that the effects of a particular choice in one part of the problem can't be localized, but have far-reaching implications for choices made elsewhere. Compare this with the prototypical NP-complete problem, CNF-SAT, where the parts are logical clauses containing several components each, but each component may appear in multiple clauses. Each choice about the value of one component of one clause may have far-reaching effects on the other clauses.
Or similarly, consider bin packing. Items that fit optimally into the first bin may turn out to be just what was needed to fill out the space in later bins, so that an optimal packing of the first bin may get a short-term win at the expense of the overall packing; an overall optimal solution may pack the first bin suboptimally in order to save certain special items for later on.
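A tiny run of first-fit (a standard greedy bin-packing heuristic, used here purely for illustration) shows the same effect: greedily filling early bins with the small items wastes exactly the space the large items need later.

```python
# First-fit bin packing (a standard greedy heuristic) on a small example.
# Capacity 10. Pairing the 4s early wastes the space the 6s need later;
# the optimum pairs each 4 with a 6 and uses only 3 bins.
def first_fit(items, capacity):
    bins = []                              # remaining free space per open bin
    for item in items:
        for i, space in enumerate(bins):
            if item <= space:              # place in the first bin it fits
                bins[i] -= item
                break
        else:
            bins.append(capacity - item)   # no fit anywhere: open a new bin
    return len(bins)

print(first_fit([4, 4, 4, 6, 6, 6], 10))  # 4 bins; the optimum is 3
```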
Just to be clear: if you add items in decreasing value/weight order until no further item fits, the result is optimal whenever the accumulated weight exactly matches the available capacity. In other words, it is ALWAYS optimal for the total weight it accumulated. I guess this is the result you had in mind. It is not hard to prove and can be found in any introduction to knapsack problems, for example in Martello, S. & Toth, P., *Knapsack Problems: Algorithms and Computer Implementations*, John Wiley & Sons, 1990.
However, as discussed in the first answer and the comments there, your method obviously isn't always optimal if it ends without filling the bag completely. Think of $W=10^n$ for large $n$, and two items: $A$ (weight $1$, value $1$) and $B$ (weight $10^n$, value $10^n-1$). Since $A$ has the better value/weight ratio, your algorithm takes $A$ first, after which $B$ no longer fits, so it ends up with a value of $1$ when it could instead have had $B$ with a value of $10^n-1$. Nevertheless, $A$ is trivially optimal for the weight it achieves, i.e. optimal for a knapsack with total weight $1$.
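This second counterexample can also be checked directly with a 0/1 version of the ratio heuristic (one copy of each item, matching the example above):

```python
# 0/1 greedy by value/weight on the W = 10^n example (one copy of each item).
n = 6
W = 10 ** n
items = [(1, 1), (10 ** n, 10 ** n - 1)]  # (weight, value): A, then B

def greedy_01(items, W):
    """Take items in decreasing value/weight order, at most once each."""
    total, remaining = 0, W
    for w, v in sorted(items, key=lambda wv: wv[1] / wv[0], reverse=True):
        if w <= remaining:
            total += v
            remaining -= w
    return total

# A has ratio 1, B has ratio (10**n - 1) / 10**n < 1, so A is taken first
# and B no longer fits: the heuristic returns 1, not the optimum 10**n - 1.
print(greedy_01(items, W))  # 1
```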