Is the historical method of teaching physics a "legitimate, sure and fruitful method of preparing a student to receive a physical hypothesis"?
The historical method is necessary for several reasons:
- The historical literature shows you how the correct arguments won out over time. This is pretty much the only way to persuade yourself that the results we have are correct, since when you are learning something you are recapitulating the historical process, and all the confusions people had in the past will be in your head at some point. It is very difficult to get rid of the bad ideas by using textbooks, because the textbooks are written by people who already believe the result. So if you want to know how come we believe quantum mechanics works, or how come we believe GR, you need to read the historical literature. The arguments in textbooks are only persuasive to the converted.
- The historical literature is unfortunately not preserved well in most textbooks. The authors of textbooks do not paraphrase the original articles, but try to present their own arguments. These arguments don't need to persuade anyone, since they are preaching to the choir, so the arguments tend to decay over time. There are rare textbooks, like Landau and Lifshitz, or Feynman, where the arguments are generally better, but even in the best books you don't get universally correct arguments. It is amazing how many times someone comes to you with "The argument in such-and-such textbook is wrong, therefore relativity is wrong". Usually the argument is indeed wrong, but the result is perfectly fine. The original author is usually better, since that author had to at least persuade a referee and an editor.
- There are forgotten methods and ideas: if you don't read Gell-Mann, you don't learn current commutators (at least not well, or outside of 2d). If you don't read Kadanoff and Polyakov, you don't learn about OPE closure in higher dimensions determining the anomalous dimensions/critical exponents (only the most fruitful 2d version, where the OPE is constrained by conformal invariance, has been preserved in textbooks). If you don't read Schwinger and Feynman, you don't understand the particle picture (this is particularly unforgivable). If you don't read Mandelstam, you don't learn S-matrix methods (this is a crime against a whole generation). This is a disease of transmission: the authors simply wish to transmit the shortest path to the most famous calculation, so the ideas get trampled shamelessly, and the textbooks are nearly always terrible caricatures. There are exceptions, like Weinberg's field theory books, but this is because the author in this case is steeped in the literature, is personally responsible for a large chunk of it, and has his own take on many of the essential results.
- By reading good physicists of the past, you learn what a good physics argument reads like. This is important too. There is no better guide to scientific style than Einstein.
The old literature will introduce you to many problems which used to be considered extremely important, but then were put on the back burner. Usually this is because 10 or more years go by and nobody has any new ideas. There are always many of these, and they have been inspirations for many great developments. For example:
- Tony Skyrme was directly inspired by the then-ancient and completely discredited ideas of Lord Kelvin on topological ether-knot atoms to make his famous model of the proton as a topological defect in the recently discovered pion condensate. Balachandran, Nair, and Rajeev revived this idea in the context of large N, and Witten made it stick, and this was also an exercise in resuscitating a dead idea.
- Gutzwiller was completing an old problem, left unsolved in the early days of quantum mechanics: finding the semiclassical quantization of chaotic systems. Einstein was already wondering about this in the 1910s, since Bohr-Sommerfeld quantization assumed the classical system was integrable. The Gutzwiller trace formula is one of the central discoveries of the late 20th century, and it was musty stuff when it was formulated.
- Schwarz and Scherk revived the ideas of Kaluza and Klein in the late 1970s to make sense of string theory. The unified field theory ideas were, by 1970, firmly in the dustbin.
- Kolmogorov attacked the ancient problem of solar-system stability, and showed that perturbation theory can be summed away from resonances. This was turned into the KAM theorem over several decades.
- Widom recently revived the thermodynamics of three-phase interfaces, a problem which was considered in the 19th century and then neglected ever since.
This is a seriously truncated list, because so many developments are resuscitated old problems. Most of the literature is expanding on an old problem which was left unsolved for one reason or another. Unfortunately, many of the problems considered unsolved in earlier eras were simply solved later, and you have to know the later developments to know which are still open.
Here is a short, woefully incomplete list of dead problems which were never solved and are no longer on anyone's mind as far as I know:
- Mandelstam's double dispersion relations: How do these work? What the heck are they? What do they mean? This was left unsolved and nobody works on it.
- Regge degeneracies: I asked about this here: why are the even and odd trajectories degenerate in QCD? This was a pressing issue in the 1960s. Now nobody knows and nobody cares (hopefully this will change).
- What is the sigma f(660)? Is it a real particle? Again, this is an on-again, off-again question.
- What is going on with the idea of pomeron-quark coupling? Is this for real? Is there such a thing as a pomeron-quark vertex? This seems like nonsense, but it predicts that pion-proton total cross sections are 2/3 of the proton-proton cross section, and this is experimentally more or less true. But the idea contradicts Gribov's idea of a universal coefficient. Is Gribov's idea true? Or is there a pomeron-quark coupling? Nobody knows and nobody cares.
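The 2/3 in the last item is just additive quark counting (the standard additive-quark-model argument, which, as I read it, is what a pomeron-quark vertex amounts to): if the pomeron couples to each valence quark separately, the total cross section counts pairs of quarks, one from each hadron, so with 2 valence quarks in the pion and 3 in the proton,

$$\frac{\sigma_{\pi p}}{\sigma_{pp}} = \frac{2\times 3}{3\times 3} = \frac{2}{3}.$$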
There are tons more. These old questions are sources of modern ideas, as the ideas come back in and out of fashion. You can't do anything without knowing all of these.
It is also a terrible disservice to previous generations to ignore what they wrote. Unfortunately, time is limited, and the old papers can have suboptimal notation and confusing presentation, due to the dead weight of the era's conventions. This is why I think it is useful to make quick summaries of the entire content of the old papers, using modernized notation and methods, while keeping all the ideas. This is a difficult art, modernizing the papers, but it can be done. In principle this is the job of textbooks, but textbook authors are never going to do this, so one should do it as well as possible.
The other reason to read the old literature is that certain things will pop into your head that are old, and you won't realize they are the same things that popped into the original author's head. For example, when I was learning statistical mechanics, I was disappointed that the statistical description was on phase space. I thought "why don't you just make a probability distribution for the particle to be at position x and have velocity v, and find the equation for that distribution?" Of course, this was the original idea of Boltzmann, and the modern theory evolved from this starting point, not the other way around. But I couldn't learn statistical mechanics until I saw somebody do the obvious thing (the Boltzmann equation) first.
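The "obvious thing" fits in one line: posit a single-particle distribution $f(x,v,t)$ and ask for its evolution equation, which is Boltzmann's,

$$\frac{\partial f}{\partial t} + v\cdot\nabla_x f + \frac{F}{m}\cdot\nabla_v f = \left(\frac{\partial f}{\partial t}\right)_{\mathrm{coll}},$$

where the left side is free streaming of particles under an external force $F$, and the right side is the change in $f$ from collisions, which Boltzmann's Stosszahlansatz expresses as an integral quadratic in $f$.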
There are countless other retreads over old territory:
- Every decade or so, somebody rediscovers the fact that classical EM can be made symmetric between E and B by introducing a second vector potential. The result is that the photon is doubled, and there are two separate U(1) gauge symmetries, and this is not the right way to do monopoles (it isn't topological and it doesn't give Dirac quantization). This thing is published again and again, and it is massively annoying.
- For 3 decades after 1957, people kept on rediscovering the Everett interpretation. Each person was sure their own rediscovery was different, because the Everett interpretation was misrepresented in the literature until the internet came along. So you get "many minds" and "decoherence" and "consistent histories" (although Gell-Mann always cites Everett), and so on, all describing what is essentially the same idea. I met a grad student once who had decided to recast all of quantum mechanics in information theoretic terms, and had come up with a new interpretation of quantum mechanics. A quick reference to Everett showed him that his ideas were all essentially present in Everett's thesis.
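To be concrete about the first of these retreads: the doubled construction writes Maxwell's equations symmetrically with both electric and magnetic sources,

$$\nabla\cdot E = \rho_e,\qquad \nabla\cdot B = \rho_m,\qquad \nabla\times E = -\partial_t B - j_m,\qquad \nabla\times B = \partial_t E + j_e,$$

(units with $c=1$), and then introduces a second potential $C$ alongside $A$, with $B=\nabla\times A$ and $E=-\nabla\times C$ for the source-free parts, so each potential carries its own U(1) gauge symmetry and its own photon. Nothing in this construction is topological, and nothing in it forces the Dirac quantization of the product $eg$ of electric and magnetic charges, which is why it is not the right way to do monopoles.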
There are downsides to studying too much history. The biggest is that it kills the motivation for trying out new ideas, because the good historical stuff seems so different from the shoddy stuff you yourself always seem to think of (of course, this is hindsight at work). It is also good to keep in touch with what other people are doing, since the community generally has a collective intelligence about what the fruitful things to study are, an intelligence greater than any individual's.
But the community can also be persuaded to run after nonsense, like large extra dimensions, so one has to be careful. History makes you brave in this regard, because it teaches you to see when arguments are wrong: when something is wrong, an old argument is often all it takes to see it. Of course, when something is right, it is always completely different from what came before.
Anyway, I am on the side of "yes". You should know the history.