Black hole no-hair theorems vs. entropy and surface area
Here is my answer. I should preface it by warning that this is a subject that can provoke intense discussion, and I'm sure there are physicists who would disagree. You should also be aware that I'm an expert on thermodynamics but not on general relativity.
But basically, as far as I understand it, the process of converting matter into black-hole-stuff is an irreversible one, in the usual macroscopic sense. Throwing your boxes of salt into identical black holes is somewhat analogous to what would happen if you emptied them into two identical vats of water. You would end up with two identical vats of salty water, with the same mass, temperature, and salt concentration, and the same entropy.
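To put that in standard thermodynamic language (a sketch, using nothing more than the fact that entropy is a state function of the equilibrium state): the final entropy of each vat depends only on its macroscopic variables, not on how the salt went in,

$$ S_{\text{final}} = S\!\left(U,\; V,\; N_{\mathrm{H_2O}},\; N_{\mathrm{NaCl}}\right), $$

so a vat that received a single large crystal and a vat that received the same mass of fine powder end up with exactly the same entropy, even though their initial states (and their detailed dissolution histories) were different.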
The no-hair theorem for black holes is an asymptotic one. It says that if you throw some stuff into a black hole and wait long enough, the black hole will become an arbitrarily good approximation to an "ideal" black hole (which is to say, a black hole solution of Einstein's equations), which can be completely described by its mass, charge and spin. It also says (I believe) that this convergence happens rather rapidly. But converging towards something is not the same as ever actually reaching it. In reality, nothing can cross the event horizon as seen from an outside perspective (see my answer to this question); it just gets very hard to detect, because its light is red-shifted to extremely long wavelengths.
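To put a formula behind that last statement (a sketch for a non-rotating, uncharged hole of mass $M$, using the standard Schwarzschild redshift): light emitted by a source hovering at radius $r$, just outside the horizon radius $r_s = 2GM/c^2$, reaches a distant observer with frequency

$$ \nu_\infty = \nu_{\text{emitted}}\,\sqrt{1 - \frac{r_s}{r}} \;\longrightarrow\; 0 \quad \text{as } r \to r_s, $$

so as the box approaches the horizon its light is shifted to arbitrarily long wavelengths (a freely falling box picks up an extra Doppler shift on top of this, which only dims it faster), but in the distant observer's description it never actually gets there.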
So in my view the apparent loss of information comes from assuming that the black hole actually becomes an ideal one rather than just closely approximating it. It's very similar to the question of how the entropy of an isolated vat of salt+water can increase as the salt dissolves, even though on the microscopic level, the laws of physics seem to preserve information. The resolution is that when you switch to a macroscopic description (in terms of temperature, pressure etc.), you throw away some information about the microscopic state. After the salt has dissolved, the information about its previous state (crystal or powder) is still there, but it's hidden in fine correlations between the molecules' motions. When you choose to describe the final state as an equilibrium ensemble you're basically admitting that those fine correlations can never practically be measured, and therefore choosing to ignore them. Similarly, when you choose to approximate a real black hole as an ideal one, you're basically choosing to ignore any information about what kind of salt was thrown into it in the past, on the basis that there's no longer any practical way to recover it. In both cases, the fundamental reason for the increase in entropy is the same.
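The same point can be stated in formulas (a sketch in the Gibbs/von Neumann language; the classical version just replaces the trace by a phase-space integral). The fine-grained entropy of the exact microscopic state,

$$ S_{\text{fine}} = -k_B\,\mathrm{Tr}\!\left(\rho \ln \rho\right), $$

is exactly constant under the microscopic (Hamiltonian, unitary) evolution, so no information is ever really lost. What increases is the coarse-grained entropy $S_{\text{coarse}} = -k_B\,\mathrm{Tr}(\bar\rho \ln \bar\rho)$, where $\bar\rho$ is the equilibrium ensemble that matches only the macroscopic variables you choose to keep track of. By construction $S_{\text{coarse}} \ge S_{\text{fine}}$, and the gap is precisely the information about the fine correlations you decided to throw away.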
Note that I'm not saying the box's entropy increases as it passes the event horizon. I'm saying that, as seen from an outside point of view, the box never crosses the event horizon at all; that would take an infinite amount of time. However, the outside observer would very quickly find the box becoming extremely hard to see because of the red-shifting. At some point you, as the observer, might decide as an approximation that the box might as well have crossed the event horizon, since you basically can't detect it anymore. When you do this, your approximation has a higher entropy than the "real" black hole, and that's where the entropy increase comes from.
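Roughly how quickly? A back-of-the-envelope sketch (Schwarzschild hole, radial infall): in the outside observer's coordinates the box's radius approaches the horizon exponentially,

$$ r - r_s \;\propto\; e^{-c\,t/r_s}, \qquad r_s = \frac{2GM}{c^2}, $$

so the received signal fades with a characteristic time of order $r_s/c = 2GM/c^3$, which is roughly ten microseconds per solar mass of black-hole mass. After a handful of those e-folding times the box is, for all practical purposes, undetectable, and treating it as "inside" becomes the natural approximation.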
That might seem like a weird concept. But in fact all increases in entropy are due to approximations of one kind or another. In principle you could always reverse the velocities of every particle making up a system and watch it "run backwards in time" to its initial state (unscrambling an egg or whatever). So the information about the initial conditions is always still there. We just treat things as irreversible (i.e. information-destroying or entropy-producing) because it's a very useful approximation that helps us make predictions about macroscopic systems.
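In formulas, this is just the time-reversal symmetry of the microscopic dynamics (a classical sketch). Newton's equations for $N$ interacting particles,

$$ m_i\,\frac{d^2 \mathbf{x}_i}{dt^2} \;=\; -\nabla_i V(\mathbf{x}_1, \dots, \mathbf{x}_N), $$

are second order in time, so if $\{\mathbf{x}_i(t)\}$ is a solution then so is $\{\mathbf{x}_i(-t)\}$: the same history traced backwards, with every velocity reversed. The scrambled-egg state therefore evolves, under exactly the same laws, back to the unscrambled one, if only you could flip all the velocities.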
Of course, the observer falling in with the box of salt would not want to make the same approximation as the outside observer. It would be a bad approximation from the infalling observer's point of view, because she can still see the box perfectly clearly. (If it's a big enough black hole, the box won't even get torn apart by tidal forces.) But that's OK: although we often treat entropy as an observer-independent physical quantity, it is actually observer-dependent, even for everyday things like gases. See this rather wonderful paper by Edwin Jaynes (Jaynes, E. T., 1992, "The Gibbs Paradox," in Maximum-Entropy and Bayesian Methods, G. Erickson, P. Neudorfer, and C. R. Smith (eds.), Kluwer, Dordrecht).
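The Gibbs paradox is the cleanest example of that observer-dependence (a textbook sketch of the result Jaynes discusses): mixing two equal volumes of ideal gas, each containing $N$ particles, gives an entropy change

$$ \Delta S_{\text{mix}} = \begin{cases} 2 N k_B \ln 2 & \text{if you can tell the two gases apart,} \\ 0 & \text{if you cannot,} \end{cases} $$

so the "entropy of mixing" you assign depends on which distinctions between microstates you are able (or willing) to make, exactly as with the black hole above.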