Differentiating Propagator, Green's function, Correlation function, etc

The main distinction you want to make is between the Green function and the kernel. (I prefer the terminology "Green function" without the 's. Imagine a different name, say, Feynman. People would definitely say the Feynman function, not the Feynman's function. But I digress...)

Start with a differential operator, call it $L$. E.g., in the case of Laplace's equation, then $L$ is the Laplacian $L = \nabla^2$. Then, the Green function of $L$ is the solution of the inhomogeneous differential equation $$ L_x G(x, x^\prime) = \delta(x - x^\prime)\,. $$ We'll talk about its boundary conditions later on. The kernel is a solution of the homogeneous equation $$ L_x K(x, x^\prime) = 0\,, $$ subject to a Dirichlet boundary condition $\lim_{x \rightarrow x^\prime}K(x,x^\prime) = \delta (x-x^\prime)$, or Neumann boundary condition $\lim_{x \rightarrow x^\prime} \partial K(x,x^\prime) = \delta(x-x^\prime)$.

So, how do we use them? The Green function solves linear differential equations with driving terms. $L_x u(x) = \rho(x)$ is solved by $$ u(x) = \int G(x,x^\prime)\rho(x^\prime)dx^\prime\,. $$ Whichever boundary conditions we want to impose on the solution $u$ determine the boundary conditions we impose on $G$. For example, a retarded Green function propagates influence strictly forward in time, so that $G(x,x^\prime) = 0$ whenever $x^0 < x^{\prime\,0}$. (The 0 here denotes the time coordinate.) One would use this if the boundary condition on $u$ was that $u(x) = 0$ far in the past, before the source term $\rho$ "turns on."
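To make this concrete, here is a minimal numerical sketch (my own illustration, not part of the argument above): for $L = d^2/dx^2$ on $[0,1]$ with Dirichlet conditions $u(0) = u(1) = 0$, the Green function is $G(x,x^\prime) = \min(x,x^\prime)\,[\max(x,x^\prime) - 1]$, and integrating it against a driving term reproduces the exact solution.

```python
import numpy as np

def trapezoid(f, x):
    """Trapezoidal quadrature of samples f over grid x."""
    return np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2.0

# Green function of L = d^2/dx^2 on [0, 1] with Dirichlet BCs u(0) = u(1) = 0.
# G(x, x') = min(x, x') * (max(x, x') - 1) satisfies G'' = delta(x - x'):
# it is linear on either side of x = x' and its slope jumps by 1 there.
def G(x, xp):
    return np.minimum(x, xp) * (np.maximum(x, xp) - 1.0)

# Driving term rho(x) = sin(pi x); the exact solution of u'' = rho
# with these boundary conditions is u(x) = -sin(pi x) / pi^2.
xp = np.linspace(0.0, 1.0, 20001)     # integration grid for x'
x = np.linspace(0.0, 1.0, 11)         # points where we evaluate u
rho = np.sin(np.pi * xp)

# u(x) = \int_0^1 G(x, x') rho(x') dx'
u = np.array([trapezoid(G(xi, xp) * rho, xp) for xi in x])

u_exact = -np.sin(np.pi * x) / np.pi**2
print(np.max(np.abs(u - u_exact)))    # tiny: quadrature error only
```

Note how the boundary conditions on $u$ were baked into $G$: each factor vanishes at one endpoint.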

The kernel solves boundary value problems. Say we're solving the equation $L_x u(x) = 0$ on a manifold $M$, and specify $u$ on the boundary $\partial M$ to be $v$. Then, $$ u(x) = \int_{\partial M} K(x,x^\prime)v(x^\prime)dx^\prime\,. $$ In this case, we're using the kernel with Dirichlet boundary conditions.

For example, the heat kernel is the kernel of the heat equation, in which $$ L = \frac{\partial}{\partial t} - \nabla_{R^d}^2\,. $$ We can see that $$ K(x,t; x^\prime, t^\prime) = \frac{1}{[4\pi (t-t^\prime)]^{d/2}}\,e^{-|x-x^\prime|^2/4(t-t^\prime)}, $$ solves $L_{x,t} K(x,t;x^\prime,t^\prime) = 0$ and moreover satisfies $$ \lim_{t \rightarrow t^\prime} \, K(x,t;x^\prime,t^\prime) = \delta^{(d)}(x-x^\prime)\,. $$ (We must be careful to consider only $t > t^\prime$ and hence also take a directional limit.) Say you're given some shape $v(x)$ at time $0$ and want to "melt" it according to the heat equation. Then later on, this shape has become $$ u(x,t) = \int_{R^d} K(x,t;x^\prime,0)v(x^\prime)d^dx^\prime\,. $$ So in this case, the boundary was the time-slice at $t^\prime = 0$.
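As a quick symbolic check (my own sketch, using sympy in $d = 1$), one can verify that this kernel is annihilated by $L$ and stays normalized, which is what makes the $\delta$-function boundary condition work:

```python
import sympy as sp

x, xp = sp.symbols('x xprime', real=True)
tau = sp.symbols('tau', positive=True)        # tau = t - t' > 0

# Heat kernel in d = 1 spatial dimension
K = sp.exp(-(x - xp)**2 / (4 * tau)) / sp.sqrt(4 * sp.pi * tau)

# L K = (d/dtau - d^2/dx^2) K vanishes identically: K solves the
# homogeneous heat equation for tau > 0
LK = sp.diff(K, tau) - sp.diff(K, x, 2)
print(sp.simplify(LK))                        # 0

# K integrates to 1 over x for every tau > 0, consistent with the
# delta-function limit as tau -> 0+
print(sp.integrate(K, (x, -sp.oo, sp.oo)))    # 1
```

The same computation goes through in any $d$, one Gaussian factor per dimension.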

Now for the rest of them. Propagator is sometimes used to mean Green function, sometimes used to mean kernel. The Klein-Gordon propagator is a Green function, because it satisfies $L_x D(x,x^\prime) = \delta(x-x^\prime)$ for $L_x = \partial_x^2 + m^2$. The boundary conditions specify the difference between the retarded, advanced and Feynman propagators. (See? Not Feynman's propagator) In the case of a Klein-Gordon field, the retarded propagator is defined as $$ D_R(x,x^\prime) = \Theta(x^0 - x^{\prime\,0})\,\langle0| \varphi(x) \varphi(x^\prime) |0\rangle\, $$ where $\Theta(x) = 1$ for $x > 0$ and $= 0$ otherwise. The Wightman function is defined as $$ W(x,x^\prime) = \langle0| \varphi(x) \varphi(x^\prime) |0\rangle\,, $$ i.e. without the time ordering constraint. But guess what? It solves $L_x W(x,x^\prime) = 0$. It's a kernel. The difference is the $\Theta$ out front, which becomes a Dirac $\delta$ upon taking one time derivative. If one uses the kernel with Neumann boundary conditions on a time-slice boundary, the relationship $$ G_R(x,x^\prime) = \Theta(x^0 - x^{\prime\,0}) K(x,x^\prime) $$ holds in general.

In quantum mechanics, the evolution operator $$ U(x,t; x^\prime, t^\prime) = \langle x | e^{-i (t-t^\prime) \hat{H}} | x^\prime \rangle $$ is a kernel. It solves the Schroedinger equation and equals $\delta(x - x^\prime)$ for $t = t^\prime$. People sometimes call it the propagator. It can also be written in path integral form.
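Here is a numerical sketch of this (my own illustration, with $\hbar = m = 1$ and arbitrary parameters): integrating the free-particle kernel $U(x,t;x^\prime,0) = (2\pi i t)^{-1/2} e^{i(x-x^\prime)^2/2t}$ against an initial Gaussian wavepacket reproduces the standard analytic evolution, exactly as in the boundary value picture above.

```python
import numpy as np

def trapezoid(f, x):
    return np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2.0

# Free-particle evolution kernel (hbar = m = 1):
#   U(x, t; x', 0) = (2 pi i t)^(-1/2) exp(i (x - x')^2 / (2 t))
# Evolve a Gaussian wavepacket by integrating against U, then compare
# with the textbook analytic result; sigma, t, and the grids are arbitrary.
sigma, t = 1.0, 0.5
xp = np.linspace(-10.0, 10.0, 40001)                  # integration grid for x'
psi0 = (np.pi * sigma**2) ** -0.25 * np.exp(-xp**2 / (2 * sigma**2))

def evolved(x):
    U = (2j * np.pi * t) ** -0.5 * np.exp(1j * (x - xp) ** 2 / (2 * t))
    return trapezoid(U * psi0, xp)

x_test = np.linspace(-3.0, 3.0, 7)
psi_num = np.array([evolved(x) for x in x_test])

# Exact free evolution of the same Gaussian (s is the complex spreading factor)
s = 1 + 1j * t / sigma**2
psi_exact = (np.pi * sigma**2) ** -0.25 / np.sqrt(s) * np.exp(
    -x_test**2 / (2 * sigma**2 * s))

print(np.max(np.abs(psi_num - psi_exact)))            # small: the kernel reproduces it
```

The "boundary" here is again a time-slice, the $t = 0$ surface carrying the initial wavefunction.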

Linear response and impulse response functions are Green functions.

These are all two-point correlation functions. "Two-point" because they're all functions of two points in space(time). In quantum field theory, statistical field theory, etc. one can also consider correlation functions with more field insertions/random variables. That's where the real work begins!


It has been many years since you asked this question. I assume that over time you have compiled definitions and distinctions for the other terms in your list. However, there are terms not defined by @josh's answer (an answer I have relied on multiple times; thank you for posting it, @josh). Personally, my background is in Lattice QCD, which is both a quantum field theory and a statistical field theory, so I have also had to sit down and organize the meanings of all these terms. I give a much more directed discussion of these concepts in regards to the thermodynamic partition fxn and free energy $F$ in (Susceptibilities and response functions). Here's the BIG picture I have come up with during my PhD program.

----The Short and Sweet----

  • The problem is that a lot of people are confused about this, so often people just define their own lingo. If you assume the free field and linear response limit, then propagators, Green functions (fxns), and linear response fxns are the same. When you include some interaction term or start shifting complex poles around, these things become murky. To be facetious, everything is the same if you don't want to think too hard about it, hence why there's so much confusion.

  • First and foremost, the propagator is the transition amplitude of a particle from spacetime coordinate $x$ to spacetime coordinate $y$ (Le Bellac, Wiki).

  • The propagator of a non-interacting field theory IS the Green function (fxn).

  • The propagator of an interacting field theory is a convolution between the non-interacting theory's Green function and a "spectral function" (Kallen-Lehmann spectral representation). Thus the propagator is either a Green fxn or a linear combination of Green fxns... Easy!

  • The adjectives "causal/retarded" and "Feynman" can be applied to either propagators or Green fxns. They describe the contour integration around the poles of the propagator or Green fxn. This is discussed in David Tong's QFT Lecture notes and G.K. here ( Causal propagator and Feynman propagator ).

  • Generally, retarded/causal 2-point fxns can be expressed in two conventions (Peskin vs. Tong's lectures & Wiki, respectively): $$ D_{Retarded} = \Theta(x^0-y^0) \left< [\phi(x), \phi(y)] \right> $$ $$ D_{Retarded} = \Theta(x^0-y^0) \left< \phi(x) \phi(y) \right> $$ These propagators satisfy the causal property, so they are also linear response functions $\chi$ (Tong).

  • The Feynman, a.k.a. time-ordered, propagator has a uniform convention in the literature: $$ D_{Feynman} = \Theta(x^0-y^0) \left< \phi(x) \phi(y) \right> + \Theta(y^0-x^0) \left< \phi(y) \phi(x) \right> = \left< \mathcal{T} \phi(x) \phi(y) \right>$$

  • The Wightman function is by definition just a correlation function (Peskin, Zee, Zuber, Huang). Nothing special, except that Wightman functions are the building blocks of other propagators. $$\Delta^{(+)} = \left< \phi(x) \phi(y) \right>$$ $$\Delta^{(-)} = \left< \phi(y) \phi(x) \right>$$ $$ D_{Retarded} = \Theta(x^0-y^0) \left( \Delta^{(+)} - \Delta^{(-)} \right)$$ $$ D_{Feynman} = \Theta(x^0-y^0) \Delta^{(+)} + \Theta(y^0-x^0) \Delta^{(-)}$$

  • Lastly, all propagators, Green fxns, Wightman, and linear response fxns can ALWAYS be understood as 2pt-correlation functions (discussed at length below).

----Linear Response Fxns are 2pt correlation fxns----

I'll start with the Kubo formulae. This derivation follows Tong "Kinetic Theory" and Gale & Kapusta. Assume we have some system at equilibrium and apply a small perturbation to it. This looks like an equilibrium Hamiltonian $H_0$ plus the perturbation $V_I$, $$H(t) = H_0 + V_I(t) $$ For this example, suppose we have applied an electric field to a wire; then the linear response function will end up being the conductivity. We write the interaction potential as a source term $\phi$ (a time-dependent, external, c-valued scalar field) multiplied by an observable $J$: $$V_I(t) = \phi(t) J(t)$$

Now consider the expectation value of the observable $J(t)$ after the perturbation $V_I(t)$ is applied. $$\left< J(t) \right> = \left< U^{-1}(t,t_0) J(t) U(t,t_0) \right>_{eq} $$ By the Dyson series (https://en.wikipedia.org/wiki/Dyson_series) we have $U(t,t_0) = \mathcal{T}\exp(- i \int_{t_0}^t dt' V_I(t'))$, which to linear order gives: $$\left< J(t) \right> \approx \left< \left(1 + i \int_{t_0}^t dt' V_I(t') \right) J(t) \left(1 - i \int_{t_0}^t dt' V_I(t') \right) \right>_{eq} $$

We expand this expectation value using the distributive property and drop the non-linear term $\propto \left( \int_{t_0}^t dt' V_I(t') \right)^2$. We are left with $$\left< J(t) \right> \approx \left< J(t) \right>_{eq} + \left< i \int_{t_0}^t dt' V_I(t') J(t) - i \int_{t_0}^t dt' J(t) V_I(t') \right>_{eq} $$ $$\left< J(t) \right> \approx \left< J(t) \right>_{eq} + i \left< \int_{t_0}^t dt' [ V_I(t'), J(t) ] \right>_{eq} $$

Insert the definition of $V_I$ from above and subtract the equilibrium value of the observable: $$\left< J(t) \right> - \left< J(t) \right>_{eq} = \delta \left< J(t) \right> \approx i \int_{t_0}^t dt' \phi(t') \left< [ J(t'), J(t) ] \right>_{eq} $$

Let the source be turned on infinitely long ago ($t_0 \rightarrow -\infty$) and insert a Heaviside step function to extend the upper limit ($t \rightarrow \infty$): $$\delta \left< J(t) \right> \approx i \int_{-\infty}^{\infty} dt' \Theta(t-t') \phi(t') \left< [ J(t'), J(t) ] \right>_{eq} $$

We can group terms to define the linear response function $\chi$, where due to time-translation invariance $$i \Theta(t-t') \left< [ J(t'), J(t) ] \right>_{eq} = \chi (t',t) = \chi (t' - t)$$ Thus we arrive at our final expression: $$\delta \left< J(t) \right> \approx \int_{-\infty}^{\infty} dt' \phi(t') \chi (t'- t) $$

Since $[ J(t'), J(t) ] = J(t')J(t) - J(t)J(t')$, the linear response function is equivalent to a 2pt correlation function. Furthermore, the form $i \Theta(t-t') \left< [ J(t'), J(t) ] \right>_{eq}$ matches Peskin's definition of the retarded Green function (a.k.a. the free-field propagator).
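A classical toy version of this result (my own sketch, with arbitrary illustrative parameters): for a damped oscillator the retarded response is $\chi(t) = \Theta(t)\, e^{-\gamma t} \sin(\Omega t)/\Omega$, and convolving it with a drive reproduces direct time integration of the driven equation of motion.

```python
import numpy as np

# Classical damped oscillator x'' + 2 g x' + w0^2 x = f(t), starting at rest.
# Its retarded response is chi(t) = theta(t) exp(-g t) sin(W t) / W,
# with W = sqrt(w0^2 - g^2).  Parameters and drive are arbitrary choices.
g, w0 = 0.3, 2.0
W = np.sqrt(w0**2 - g**2)

def f(ti):
    return np.exp(-(ti - 5.0) ** 2)       # a pulse that "turns on" near t = 5

dt = 2e-3
t = np.arange(0.0, 20.0, dt)
chi = np.exp(-g * t) * np.sin(W * t) / W  # theta(t) built in: only t >= 0 stored

# Linear response: x(t) = \int_0^t chi(t - t') f(t') dt'  as a discrete convolution
x_conv = np.convolve(chi, f(t))[: len(t)] * dt

# Direct RK4 integration of the same driven oscillator
def rk4_step(y, ti):
    def deriv(y, ti):
        return np.array([y[1], -2 * g * y[1] - w0**2 * y[0] + f(ti)])
    k1 = deriv(y, ti)
    k2 = deriv(y + 0.5 * dt * k1, ti + 0.5 * dt)
    k3 = deriv(y + 0.5 * dt * k2, ti + 0.5 * dt)
    k4 = deriv(y + dt * k3, ti + dt)
    return y + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

y = np.array([0.0, 0.0])                  # (x, x'): at rest before the drive
x_direct = np.empty_like(t)
for i, ti in enumerate(t):
    x_direct[i] = y[0]
    y = rk4_step(y, ti)

print(np.max(np.abs(x_conv - x_direct)))  # small: response fxn = retarded Green fxn here
```

Here $\chi$ is simultaneously the linear response function and the retarded Green fxn of the operator $d^2/dt^2 + 2\gamma\, d/dt + \omega_0^2$, which is the whole point of this section.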

We can also generalize to the case where the observable in the expectation value and the observable in the Hamiltonian aren't the same, i.e. the observable being measured isn't the observable coupled to the source term. For example, $$\left< \mathcal{O}_i(t) \right> \approx \left< \mathcal{O}_i(t) \right>_{eq} + i \int dt' \phi(t') \left< [ \mathcal{O}_j(t'), \mathcal{O}_i(t) ] \right>_{eq} $$ Then you are calculating a cross-correlation function.

----Propagators are 2pt correlation fxns----

The Functional Formalism of QFT will show us that the propagator is a 2pt-correlation function.

To arrive at the QFT functional formalism, we start from the path-integral formulation of the quantum mechanical transition amplitude and add a source term (THIS IS WHERE @josh ENDED HIS ANSWER, so we're just picking up where he left off... see also https://en.wikipedia.org/wiki/Path_integral_formulation#Path_integral_formula): $$ \mathcal{Z}[J] = \int D_{\phi}\, e^{-S_E[\phi] + i\int d^4x J[x]\phi[x]} $$ Exactly as in our linear response discussion, our source term is a field $\phi$ coupled to an observable/current $J$.

Note that the Wick-rotated Euclidean action $S_E$ plays the role of the Hamiltonian (http://www.math.ucr.edu/home/baez/classical/spring_garett.pdf), so that $\mathcal{Z}[J]$ is not only a transition amplitude but a generalized partition function. Essentially, we have associated a Boltzmann factor to every possible field configuration. This Boltzmann factor defines a probability measure known as the Gibbs measure: $$ \frac{\mathcal{Z}[J]}{\mathcal{Z}[0]} = \int D\mu\{x\}\, e^{i \int d^4x J[x]\phi[x]} = \mathbb{E}\left[ \exp\left(i\int d^4x J[x]\phi[x]\right) \right] $$ $$ D\mu\{x\} = D_{\phi}\, \frac{e^{-S_E[\phi]}}{\mathcal{Z}[0]} $$ Using the Gibbs measure, we now see that the generating functional is the moment generating function from probability theory, whose argument is a set of stochastic variables (the quantum fields $\phi[x]$).

An $n$pt-correlation function (shortened to $n$pt-function) can be expressed via functional derivatives of the generating functional: $$ \left< \prod_{k=1}^n \phi[x_k] \right> = (-i)^n\frac{1}{\mathcal{Z}[0]}\frac{\delta^n\mathcal{Z}[J]}{\delta J[x_1] \cdots \delta J[x_n]}\Big|_{J=0} $$ Then, by definition, the $n$-point functions are the $n^{th}$ moments of the Gibbs measure.
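A toy version of this statement (my own sketch, using real sources for simplicity and so dropping the factors of $i$): for a single Gaussian "field" $\phi \sim N(0,\sigma^2)$ instead of $\phi[x]$, derivatives of the moment generating function at zero source give the moments, including the factor of 3 in the fourth moment that Wick pairing predicts.

```python
import sympy as sp

# Single Gaussian "field" phi ~ N(0, sigma^2): the moment generating
# function collapses to Z[J] = E[exp(J phi)] = exp(sigma^2 J^2 / 2).
J, sigma = sp.symbols('J sigma', positive=True)
Z = sp.exp(sigma**2 * J**2 / 2)

# Moments are derivatives of Z at J = 0
second_moment = sp.diff(Z, J, 2).subs(J, 0)
fourth_moment = sp.diff(Z, J, 4).subs(J, 0)
print(second_moment)   # sigma**2     -> the "2pt function"
print(fourth_moment)   # 3*sigma**4   -> 3 ways to pair 4 fields (Wick)
```

The field-theory version replaces ordinary derivatives with functional derivatives and the single variance $\sigma^2$ with the two-point kernel.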

We can see by definition that the transition amplitude is the 2nd moment of the Gibbs measure. Thus, the propagator is a 2pt function.

----Green Functions are 2pt correlation fxns----

As stated above, the Green fxn is the free-field limit of the propagator. This case is analytically solvable, so rather than just giving an argument, we can show explicitly for the free scalar field that the 2pt function is its Green fxn.

In "QFT in a NutShell" CH 1.3, Zee shows that for a free field the generating functional can be written $$Z[J] = Z[J=0] e^{\frac{i}{2} \iint d^4x' d^4y' J(x') G_F(x'-y')J(y')}$$ Taking the functional derivative \begin{align} \frac{-1}{Z[0]}\frac{\delta^2 Z[J]}{\delta J(x) \delta J(y)} \big\vert_{j=0} &= \frac{-1}{2Z[0]}\frac{\delta}{\delta J(x)} \left( Z[j] \left( \int d^4y' G_F(y'-y) J(y') + \int d^4x' J(x') G_F(x'-y) \right) \right) \big\vert_{j=0} \\ &= \frac{1}{2Z[0]} \left( Z[J] \times 2 G_F(x-y) \right) \big\vert_{j=0} \\ &= G_F(x-y) \end{align} Thus we arrive at the previous stated claim that for the Free Field the propagator yields the Green fxn. Since the green function is the propagator for a free field and all propagators are 2pt fxns then.... (drum roll please)... All Green fxns are 2pt fxns.

----A connection between propagators, green fxns, and linear response fxns----

We could have shortcut all these derivations and simply done a Volterra expansion (like a Taylor expansion, but with convolutions instead of derivatives - https://en.wikipedia.org/wiki/Volterra_series#Continuous_time). To linear order the Volterra expansion is... you guessed it! $$\left< J(t) \right> \approx \left< J(t) \right>_{eq} + \int_{t_0}^t dt' \phi(t') \chi (t'- t) $$ Note that by truncating the Volterra expansion at linear order we have chosen a linear system, which is exactly the setting Green function approaches can solve. To beat a dead horse: the wiki page for Green functions says, "If the operator is translation invariant then the Green's function can be taken to be a convolution operator. In this case, the Green's function is the same as the impulse response of linear time-invariant system theory."

Furthermore, the source term $\phi(t)$ in my perturbation $V_I(t)$ is equivalent to the "driving force" that @josh refers to as $\rho$. From this Volterra series vantage point you can see how our answers are connected.

If you want to consider non-linear interactions, then you can't truncate your Volterra series at first order, and your response kernels become non-linear. The whole system is no longer solvable with a measly Green function! You'll need higher-order Feynman diagrams with loops and vertices and all that garbage.

----Citations----

MIT OCW 8.324 "Relativistic Quantum Field Theory II," Lecture 7 notes: https://ocw.mit.edu/courses/physics/8-324-relativistic-quantum-field-theory-ii-fall-2010/lecture-notes/MIT8_324F10_Lecture7.pdf

David Tong, "Kinetic Theory" lecture notes: http://www.damtp.cam.ac.uk/user/tong/kinetic.html

David Tong, "Quantum Field Theory" lecture notes: http://www.damtp.cam.ac.uk/user/tong/qft.html

Kapusta & Gale, "Finite-Temperature Field Theory"

Le Bellac, "Thermal Field Theory"

Peskin & Schroeder, "An Introduction to Quantum Field Theory"

Huang, "Quantum Field Theory: From Operators to Path Integrals"

Zee, "Quantum Field Theory in a Nutshell"

Itzykson & Zuber, "Quantum Field Theory"


josh's answer is good, but I think there are two points that require clarification.

First, his sentence defining the kernel makes no sense, because as written the dummy limit variable appears on both sides of the equation. In this context, we need to distinguish between a single "time-type" independent variable $t$ and the other "space-type" independent variables ${\bf x}$, which are treated inequivalently. (I'm not using the terms "timelike" or "spacelike" to avoid confusion with special relativity, as this distinction can apply whether or not the PDE is Lorentz invariant.)

The correct statement is "The kernel is a solution of the homogeneous equation $L_{{\bf x}, t}\, K({\bf x}, t; {\bf x}', t') = 0$, subject to a Dirichlet boundary condition [in time] $K({\bf x}, t; {\bf x}', t) = \delta^d({\bf x} - {\bf x}')$ or a Neumann boundary condition $\partial_t K({\bf x}, t; {\bf x}', t) = \delta^d({\bf x} - {\bf x}')$, where $d$ is the number of spatial dimensions."

Also, I think it's misleading to bold the word "linear" only when discussing the Green's function, because that seems to imply that the linearity is important for distinguishing the Green's function and the kernel. In fact, the kernel is also used to solve linear differential equations. I would say the primary difference between their use cases is that the Green's function is used to solve inhomogeneous differential equations, and the kernel is used to solve homogeneous boundary value problems. (For inhomogeneous boundary value problems, the idea of the kernel is effectively subsumed into the process of choosing the Green's function to get the boundary conditions right.)