Why do PDEs seem so unnatural?

The big problem that makes PDEs so difficult is geometry. ODEs feel fairly natural because there are only a few cases to consider for the geometry of the domain and the information we are given on it: the solution lives on an interval, and the data sit at a point or at the endpoints. Because of this we can find general solutions to many (linear) ODEs, and there is usually a natural progression for getting there.

PDEs, on the other hand, have two or more independent variables, so the variety of possible domains grows from intervals to essentially any reasonable connected region. Initial and boundary values therefore carry much more information about the solution, and a general solution would have to account for every possible geometry. That is not really possible in any meaningful way, so there usually are no general solutions.

When we do pick a geometry, it often simplifies the problem significantly. One particularly nice domain is $\mathbb{R}^n$. Many of the standard PDEs have invariance properties, so if the domain gives us enough room to "shift" and "scale" the equation, we can often reason our way to what the solution should look like. In these situations there may be general solution formulas (see PDEs on unbounded domains), and those solutions look much more like the straightforward ones we see for ODEs.
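
For a concrete illustration (a standard textbook example, not something specific to this answer): the heat equation on all of $\mathbb{R}^n$ has a general solution formula precisely because translation invariance lets us solve for one point source and then shift and superpose,

$$u_t = \Delta u \ \text{ on } \mathbb{R}^n \times (0,\infty), \qquad u(x,0) = g(x),$$

$$u(x,t) = \int_{\mathbb{R}^n} \Phi(x-y,\,t)\, g(y)\, dy, \qquad \Phi(x,t) = \frac{1}{(4\pi t)^{n/2}}\, e^{-|x|^2/4t}.$$

On a bounded domain with a boundary, no single formula like this works for every geometry.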

Many PDEs and ODEs simply don't have closed-form solutions, so we usually fall back on series methods and other indirect ways of writing solutions that don't really "look" like solutions.
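
To make that concrete (a standard ODE example, chosen here only for illustration): the Airy equation $y'' = xy$ has no elementary closed form, but substituting a power series $y = \sum_{n \ge 0} a_n x^n$ turns it into a recurrence,

$$\sum_{n\ge 0}(n+2)(n+1)\,a_{n+2}\,x^{n} = \sum_{n\ge 1} a_{n-1}\,x^{n} \quad\Longrightarrow\quad a_2 = 0,\qquad a_{n+2} = \frac{a_{n-1}}{(n+2)(n+1)} \ \ (n \ge 1),$$

so the "solution" is a pair of infinite series determined by $a_0$ and $a_1$ rather than a formula in familiar functions.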

Separation of variables is a reasonable guess that each independent variable should contribute to the solution somewhat independently. We try writing the solution as a product (or a sum, or some other combination) of functions of each independent variable on its own, and this often reduces the problem in a way that lets us separate the PDE into a sequence of ODEs. We don't know in advance that this will work, but if we can prove uniqueness of the solution, then finding any solution at all means we have found the solution to the problem.
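
Here is what that reduction looks like in the simplest case (a standard example; the interval $[0,L]$ and the Dirichlet conditions are chosen just for illustration). For the heat equation

$$u_t = u_{xx}, \qquad u(0,t) = u(L,t) = 0,$$

the guess $u(x,t) = X(x)\,T(t)$ gives

$$\frac{T'(t)}{T(t)} = \frac{X''(x)}{X(x)} = -\lambda,$$

and since one side depends only on $t$ and the other only on $x$, both must equal a constant $-\lambda$. The PDE splits into two ODEs,

$$X'' + \lambda X = 0,\quad X(0) = X(L) = 0, \qquad\qquad T' + \lambda T = 0,$$

and the boundary-value problem for $X$ forces $\lambda_k = (k\pi/L)^2$, so the full solution is a superposition of the products $e^{-\lambda_k t}\sin(k\pi x/L)$.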

The last main reason is that the theory of PDEs is much harder than the theory of ODEs. So, when you're first learning to solve ODEs, you can be introduced to the methods along with a bit of theory and some background on why each guess and technique makes sense. When first learning to solve PDEs, however, you probably will not have anywhere near the background you need to fully understand the problems. You can be taught the methods, but they will always seem like random guesses or techniques that just happen to work, until you learn the theory behind them. As Eric Towers mentions, some Lie algebra would be a good place to start, and I would also recommend PDE books with a more theoretical slant, such as Lawrence Evans' text. Since you seem to have some background in real analysis (and so presumably some basic modern/abstract algebra), I think both of these paths should be achievable at your level.


I would strongly recommend learning about Lie symmetry analysis of differential equations.

  • Even in ODEs, the sentence structure you describe is common. "If $N_x = M_y$, then the equation is exact and we can..."
  • All the techniques from ODEs and all the techniques you mention for PDEs can be expressed in this one framework (a small worked example follows this list).
  • Computer technology is up to the task of computing the (algebraically horrendous) prolongations needed to calculate anything. This was definitely not so in Lie's time.
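
To make the framework a bit less abstract, here is one small, standard example (my illustration, not taken from the books below): the heat equation $u_t = u_{xx}$ is invariant under the one-parameter scaling group

$$(x,\,t,\,u) \;\longmapsto\; (\lambda x,\; \lambda^2 t,\; u), \qquad \lambda > 0.$$

Solutions invariant under this group must take the similarity form $u(x,t) = f(\eta)$ with $\eta = x/\sqrt{t}$, and substituting reduces the PDE to the ODE

$$f''(\eta) + \tfrac{\eta}{2}\, f'(\eta) = 0,$$

whose solutions give the familiar error-function profile. From the Lie point of view, essentially every "lucky guess" of this sort is a search for solutions invariant under some symmetry group of the equation.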

Starting places (links are to the publisher; you can find these elsewhere as well):

  • Bluman and Kumei, Symmetries and Differential Equations
  • Olver, Applications of Lie Groups to Differential Equations

Additionally, if you get your head wrapped around these ideas, you will have a head start on understanding Galois theory. (Groups of symmetries holding the set of solutions of a differential equation invariant are remarkably analogous to groups of symmetries holding the set of roots of a polynomial invariant.)


I'd tend to agree that "random" PDEs can seem unnatural, or, at least ... random. But, as with other parts of mathematics, in my experience we don't really care nearly as much about random examples as about things that arise in the course of relatively serious (mathematical and otherwise) enterprises, even if we might like to prove the most general results possible without extra effort.

Also, technical possibilities constrain what we can do, which, as it happens, often leads to consideration of linear PDEs, at least as linearized versions of possibly more "genuine" PDEs that may better describe physical, geometric, or other phenomena.

Then, if we throw in natural symmetries, on Euclidean spaces we quickly find the three canonical partial differential operators: the Laplacian $\Delta=\sum_i \partial^2/\partial x_i^2$, the heat operator $\Delta-\partial/\partial t$, and the wave operator $\Delta-\partial^2/\partial t^2$.

The constant-coefficient aspect is due to being translation-invariant, which is a common/reasonable feature. And so on.
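
A quick sketch of why symmetry pins these down (my paraphrase of the standard argument, not text from the original answer): a second-order operator

$$L = \sum_{i,j} a_{ij}(x)\,\frac{\partial^2}{\partial x_i \partial x_j} + \sum_i b_i(x)\,\frac{\partial}{\partial x_i} + c(x)$$

commutes with all translations only if the coefficients $a_{ij}$, $b_i$, $c$ are constants, and commutes with all rotations only if $a_{ij} = a\,\delta_{ij}$ and $b_i = 0$. That leaves $a\Delta + c$, i.e. essentially the Laplacian; adding a preferred time direction in the same spirit produces the heat and wave operators.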

And, as we know, to systematically describe the process of "solve the differential equation with boundary conditions..." we can often use, construct, or prove the existence of a "Green's function", whose description and properties are probably best understood in terms of distributions, a.k.a. generalized functions. And the latter are usefully discussed via Fourier series, Fourier transforms, and other eigenfunction expansions when available.
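
As a small illustration of that mechanism (a standard computation, with the Fourier convention $\widehat{\partial_j u}(\xi) = i\,\xi_j\,\hat u(\xi)$ assumed): on $\mathbb{R}^n$ the equation $-\Delta u = f$ transforms to

$$|\xi|^2\,\hat u(\xi) = \hat f(\xi), \qquad\text{so}\qquad u = G * f \ \text{ with } \ \hat G(\xi) = \frac{1}{|\xi|^2}$$

(interpreted as tempered distributions, for $n \ge 3$), and in $\mathbb{R}^3$ this inverts to the familiar

$$G(x) = \frac{1}{4\pi\,|x|}, \qquad -\Delta G = \delta_0.$$

On a bounded domain the same idea goes through, but the Green's function must also respect the boundary conditions, which is where the geometry re-enters.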

So, in my opinion, these basic things are just more calculus, and are nearly universally useful.