Mathematical physics without partial derivatives

As to question 2, there are certainly plenty of non-trivial discrete models in statistical physics, such as the Ising or Potts models, or lattice gauge theories with discrete gauge groups, that require no partial derivatives (or indeed any operations of differential calculus) at all to formulate and simulate.
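
To make the point concrete, here is a minimal Metropolis sketch for the 2D Ising model (the lattice size, temperature and sweep count are arbitrary illustrative choices); the update rule uses nothing beyond arithmetic, comparisons and random numbers:

```python
import numpy as np

# Minimal Metropolis sketch for the 2D Ising model (illustrative, not optimized).
# Only arithmetic, comparisons, and random numbers appear -- no derivatives.
rng = np.random.default_rng(0)
L, beta, n_sweeps = 16, 0.44, 500           # lattice size, inverse temperature, sweeps
spins = rng.choice([-1, 1], size=(L, L))    # random initial configuration

for _ in range(n_sweeps * L * L):
    i, j = rng.integers(0, L, size=2)
    # Sum of the four nearest neighbours with periodic boundary conditions
    nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
          + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
    dE = 2 * spins[i, j] * nn               # energy cost of flipping spin (i, j)
    if dE <= 0 or rng.random() < np.exp(-beta * dE):
        spins[i, j] *= -1                   # accept the flip

print("magnetization per site:", spins.mean())
```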

Similarly, quantum mechanics can be formulated entirely in the operator formalism, and an entity incapable of considering derivatives could still contemplate the time-independent Schrödinger equation and solve it algebraically for the harmonic oscillator (using the number operator) or the hydrogen atom (using the Laplace-Runge-Lenz-Pauli vector operator).
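
To spell out the oscillator case: the full spectrum follows from the commutation relation alone, with no wavefunctions or derivatives anywhere,
$$H=\hbar\omega\left(a^{\dagger}a+\tfrac{1}{2}\right),\qquad [a,a^{\dagger}]=1\;\;\Longrightarrow\;\; E_{n}=\hbar\omega\left(n+\tfrac{1}{2}\right),\quad n=0,1,2,\dots$$
since $a$ lowers the eigenvalue of the number operator $N=a^{\dagger}a$ by one, and the non-negativity of $\|a\,|n\rangle\|^{2}=n$ forces the ladder to terminate at $n=0$.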

So an answer to question 1 might be: at least anything that can be written as a discrete-time Markov chain on a discrete state space, as well as anything that can be recast as an eigenvalue problem. Other problems that can be expressed in purely probabilistic or algebraic language should also be safe, although it might be hard to arrive at such formulations without using derivatives at some intermediate step.

As to question 3, I personally don't believe that an approach to classical mechanics or field theory can be correct if it isn't equivalent (at least at a sufficiently high level of abstraction) to formulating and solving differential equations. But the level of abstraction could conceivably be quite high -- for an attempt to formulate classical mechanics without explicitly referring to numbers (!) cf. Hartry Field's philosophical treatise "Science without Numbers".


Well, if you take out partial derivatives, at least quantum field theory, and in particular conformal field theory, will survive the massacre. The reason is explained in my MO answer: $p$-adic numbers in physics

One can use random/quantum fields $\phi:\mathbb{Q}_{p}^{d}\rightarrow \mathbb{R}$ as toy models of fields $\phi:\mathbb{R}^d\rightarrow\mathbb{R}$. In this $p$-adic or hierarchical setting, Laplacians and all that are nonlocal and not given by partial derivatives.
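
For concreteness, a standard choice of "Laplacian" in the one-dimensional $p$-adic setting is the Vladimirov operator, an integral (hence manifestly nonlocal) operator; the normalization constant below is one common convention and varies between authors:
$$(D^{\alpha}\phi)(x)=\frac{1-p^{\alpha}}{1-p^{-\alpha-1}}\int_{\mathbb{Q}_{p}}\frac{\phi(x)-\phi(y)}{|x-y|_{p}^{\alpha+1}}\,dy\,.$$
No partial derivatives enter; the operator acts in Fourier space as multiplication by $|\xi|_{p}^{\alpha}$.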

Most equations in physics are local and therefore need partial derivatives in order to be formulated. What should remain, in the very hypothetical scenario proposed in the question, is everything pertaining to nonlocal phenomena.


Is it possible to accurately simulate any non-trivial physics without computing partial derivatives?

Yes. An example is the nuclear shell model as formulated by Maria Goeppert Mayer in the 1950s. (The same would also apply to, for example, the interacting boson model.) The way this type of shell model works is that you take a nucleus that is close to a closed shell in both neutrons and protons and treat it as an inert core plus some number of valence particles and holes; e.g., $^{41}\text{K}$ (potassium-41) would be treated as one proton hole coupled to two neutrons. There is some vector space of possible states for these three particles, and there is a Hamiltonian that has to be diagonalized. When you diagonalize the Hamiltonian, you have a prediction of the energy levels of the nucleus.
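
As a toy illustration of that workflow (the matrix below is filled with placeholder numbers, not real nuclear matrix elements), the whole calculation amounts to writing down a Hermitian matrix on a finite basis and diagonalizing it:

```python
import numpy as np

# Toy illustration of the shell-model workflow: pick a finite basis of
# valence-particle configurations, fill in a Hermitian Hamiltonian matrix,
# and diagonalize it.  The numbers below are made-up placeholders, not
# real nuclear matrix elements; only the structure of the calculation matters.
basis = ["config_1", "config_2", "config_3", "config_4"]   # hypothetical configurations

# Single-particle (diagonal) energies plus a residual interaction (off-diagonal)
H = np.array([
    [0.00, 0.30, 0.10, 0.00],
    [0.30, 1.20, 0.25, 0.15],
    [0.10, 0.25, 1.80, 0.40],
    [0.00, 0.15, 0.40, 2.50],
])                                                          # MeV, illustrative only

energies, states = np.linalg.eigh(H)                        # no derivatives anywhere
print("predicted levels (MeV), relative to the ground state:")
print(energies - energies[0])
```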

You do have to determine the matrix elements of the Hamiltonian in whatever basis you've chosen. There are various methods for estimating these. (They cannot be determined purely from the theory of quarks and gluons, at least not with the present state of the art.) In many cases, I think these estimates are actually done by some combination of theoretical estimation and empirical fitting of parameters to observed data. If you look at how practitioners have actually estimated them, I'm sure their notebooks contain lots of calculus, including partial derivatives, or else they are recycling other people's results, which were certainly not obtained in a world where nobody knew about partial derivatives. But that doesn't mean that partial derivatives are really required in order to find these matrix elements.

As an example, people often use a basis consisting of solutions to the position-space Schrödinger equation for the harmonic oscillator. This is a partial differential equation because it contains the kinetic energy operator, which is basically the Laplacian. But the matrix elements of this operator can probably be found without ever explicitly writing down a wavefunction in the position basis and calculating a Laplacian; e.g., there are algebraic methods. And in any case, many of the matrix elements in such models are simply fitted to the data.
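
As a sketch of what such an algebraic route looks like (units $\hbar=m=\omega=1$, basis truncated at $N$ oscillator states, so entries near the truncation edge are inaccurate), the ladder-operator matrices yield all the matrix elements by plain matrix multiplication:

```python
import numpy as np

# Algebraic route to oscillator matrix elements: work in the number basis
# |0>, |1>, ..., |N-1> and never write down a position-space wavefunction.
# Units: hbar = m = omega = 1; the basis is truncated at N states, so the
# last rows/columns of products like p @ p suffer from truncation effects.
N = 12
n = np.arange(N)
a = np.diag(np.sqrt(n[1:]), k=1)          # annihilation operator: a|n> = sqrt(n)|n-1>
x = (a + a.T) / np.sqrt(2)                # position operator
p = 1j * (a.T - a) / np.sqrt(2)           # momentum operator
H = 0.5 * (p.conj().T @ p + x @ x)        # kinetic + potential, purely by matrix algebra

# The diagonal of H reproduces E_n = n + 1/2 (away from the truncation edge)
print(np.round(np.real(np.diag(H))[:8], 6))
```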

The interacting boson model (IBM) is probably an even purer example of this, although I know less about it. It's a purely algebraic model. Although its advocates claim that it is in some sense derivable as an approximation to a more fundamental model, I don't think anyone has ever actually succeeded in determining the IBM's parameters for a specific nucleus from first principles. The parameters are simply fitted to the data.

Looking at this from a broader perspective, here is what I think is going on. If you ask a physicist how the laws of physics work, they will probably say that the laws of physics are all wave equations. Wave equations are partial differential equations. However, all of our physical theories except for general relativity fall under the umbrella of quantum mechanics, and quantum mechanics is perfectly linear. There is a no-go theorem by Gisin that says you basically can't get a sensible theory by adding a nonlinearity to quantum mechanics. Because of the perfect linearity, our physical theories can also just be described as exercises in linear algebra, and we can forget about a specific basis, such as the basis consisting of Dirac delta functions in position space.

In terms of linear algebra, the problem is then to determine the Hamiltonian. If we don't have any systematic way of determining an appropriate Hamiltonian, then we get a theory that lacks predictive power. Even for a finite-dimensional space (such as in the shell model), an $n$-dimensional space has $O(n^2)$ unknown matrix elements in its Hamiltonian. Determining these purely by fitting to experimental data would be a vacuous exercise, since typically the number of observations we have available is $O(n)$. One way to determine all these matrix elements is to require that the theory consist of solutions to some differential equation. But there is no edict from God that says this is the only way to do so. There are other methods, such as algebraic methods that exploit symmetries. This is the kind of thing that the models described above do, either partially or exclusively.
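
A toy sketch of that last idea (the operators and coefficients here are illustrative, not taken from any actual shell-model or IBM fit): if the Hamiltonian is required to be a combination of a few symmetry-derived operators, the number of free parameters collapses from $O(n^2)$ to a handful.

```python
import numpy as np

# Sketch of how symmetry tames the O(n^2) parameter count: instead of fitting
# every matrix element, write H as a combination of a few invariant operators.
# For a single angular momentum j, the (2j+1)-dimensional Hamiltonian below is
# pinned down by just two coefficients rather than (2j+1)^2 matrix elements.
def jz_matrix(j):
    """Jz in the |j, m> basis, m = j, j-1, ..., -j (hbar = 1)."""
    return np.diag(np.arange(j, -j - 1, -1.0))

j = 2.0
dim = int(2 * j + 1)
jz = jz_matrix(j)
j2 = j * (j + 1) * np.eye(dim)            # Casimir operator J^2 acts as a constant

a_coef, b_coef = 0.5, 0.1                 # the only free parameters, fitted to data
H = a_coef * j2 + b_coef * (jz @ jz)      # toy "algebraic" Hamiltonian

print(np.linalg.eigvalsh(H))              # levels: a*j(j+1) + b*m^2
```

Here a $(2j+1)$-dimensional Hamiltonian is determined by two fitted numbers, which is exactly the kind of economy that makes algebraic models predictive rather than vacuous.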

References

Gisin, N., "Weinberg's non-linear quantum mechanics and supraluminal communications," Physics Letters A 143(1-2), 1-2 (1990), http://dx.doi.org/10.1016/0375-9601(90)90786-N