What is the theme of analysis?
I'd say that one of the underlying themes of analysis is, really, the limit. In pretty much every subfield of analysis, we spend a lot of time trying to control the size of certain quantities, with taking limits in mind. This is especially true in PDEs, where we consistently need norm estimates on various quantities. Let's just discuss the "basics" of the "basic" subjects (standard topics in real analysis, complex analysis, measure theory, and functional analysis). I'm going to keep this discussion loose, since we could quickly get into a very drawn-out and detailed one.
Real analysis is built on limits. Continuity, differentiability, integration, series, and so on all require the concept of a limit. Complex analysis has a bit of a different "flavor" than the other core subjects, but it still requires limits for pretty much everything. By Goursat's theorem, holomorphicity is equivalent to complex differentiability on a neighborhood: limits. Integration (and all that comes with it) and residue theory: limits. We can continue this through the entire subject (Laurent series, normal families, conformal maps, etc.).

Lebesgue integration theory and the powerful theorems that come with it are centered, essentially, around swapping limits, and much of measure theory is built around this. In functional analysis, we certainly run into non-metrizable (or even non-Hausdorff) topologies at times, but limits are still central. Many of the common types of spaces, like Hilbert, Banach, and Fréchet spaces, make use of a metric. We have things like the uniform boundedness principle, compact operators, spectral theory, semigroups, Fourier analysis (a field in its own right, of course, but one that deals with a lot of functional analysis), and much more, all of which deal with limits, either explicitly or via objects related to previously-discussed material.

A significant subfield of analysis is PDEs. As I said earlier, the study of PDEs often deals with obtaining proper norm estimates on certain quantities in appropriate function spaces to prove, e.g., existence and regularity of solutions, once again highly dependent on limit arguments (and, of course, the norms themselves are limit-dependent).
Something else that I didn't touch on, but that is important to discuss, is just how many modes of convergence we use. Some common types of convergence of sequences of functions and operators are pointwise convergence, uniform convergence, local uniform convergence, almost everywhere convergence, convergence in measure, $L^p$ convergence, (more generally) convergence in norm, weak convergence, weak-star convergence, uniform operator convergence, strong operator convergence, weak operator convergence, etc. I didn't distinguish much between convergence of operators and convergence of functions here, but it is important to do so: for example, weak-star convergence is just pointwise convergence of elements of the dual, yet I listed the two separately.
EDIT: The OP asked for some details. Of course, writing out everything above in detail would amount to writing books! Instead of talking about everything, I'd like to talk about one pervasive concept in analysis that comes from limits: the integral. I'd like to note that much of the post deals with limits in many other ways as well, explicitly or otherwise. In real analysis, there are various equivalent ways to define the integral, but I'd like to use Riemann sums here: we say that a function $f$ is Riemann integrable on an interval $[a,b]$ if and only if there exists $I\in \mathbb{R}$ so that
$$I=\lim_{\|P\|\rightarrow 0}\sum\limits_{i=1}^n f(t_i)(x_i-x_{i-1}),$$
where $\|P\|$ denotes the mesh of the partition $P=\{a=x_0<x_1<\cdots<x_n=b\}$ (the length of its longest subinterval) and $t_i\in [x_{i-1},x_i]$ are sample points. We call $I$ the integral, and we denote it by $$I=\int\limits_a^b f(s)ds.$$ The integral of a continuous (limits!) function is related to the derivative (limits!) through the fundamental theorem of calculus:
For $f\in C\left([a,b]\right),$ a function $F$ satisfies $$F(x)-F(a)=\int\limits_a^x f(s)ds$$ for any $x\in [a,b]$ if and only if $F'=f$.
As the name of the theorem suggests, this is pretty important. All of this generalizes appropriately to higher dimensions, but I won't discuss that here. Sometimes, integrating a function directly can be hard, so we approximate it with easier functions (or sometimes, we have a sequence of functions tending to something, and we want to know about the limit and how it integrates). A major theorem in an introductory real analysis class is that if we have a sequence of Riemann integrable functions $(f_n)$ converging uniformly on $[a,b]$ to $f$, then $f$ is Riemann integrable, and we can swap the limit and the integral. So, we can swap these two limits. We can do the same for a series of functions that converges uniformly.
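Since we're being concrete now, here is a small numerical sketch of the definition above (my own illustration in Python, not from any particular reference): the Riemann sums converge as the mesh $\|P\|\to 0$.

```python
import numpy as np

def riemann_sum(f, a, b, n):
    """Riemann sum of f on [a, b] over a uniform partition with n subintervals,
    sampling at the left endpoints t_i = x_{i-1}."""
    x = np.linspace(a, b, n + 1)        # partition a = x_0 < x_1 < ... < x_n = b
    return np.sum(f(x[:-1]) * np.diff(x))

exact = 1 - np.cos(1.0)                 # ∫_0^1 sin(s) ds

for n in [10, 100, 1000, 10000]:
    err = abs(riemann_sum(np.sin, 0.0, 1.0, n) - exact)
    print(f"n = {n:5d}   mesh = {1.0/n:.0e}   error = {err:.2e}")
```

The error shrinks proportionally to the mesh, exactly the limit in the definition.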
Okay, let's move on. In complex analysis, the integral is still of central importance. Integrals in the complex plane are path integrals, which can be defined similarly. Complex analysis is centered on studying holomorphic functions, and a theorem of Morera relates holomorphicity to the integral:
Let $g:\Omega\rightarrow\mathbb{C}$ be continuous, and $$\int\limits_\gamma g(z)dz=0$$ whenever $\gamma=\partial R$ and $R\subset\Omega$ is a rectangle (with sides parallel to the real and imaginary axes). Then, $g$ is holomorphic.
Cauchy's theorem states that the integral of a holomorphic function along a closed curve is zero (for instance, when the curve bounds a region on which the function is holomorphic). This can be used to prove the Cauchy integral formula:
If $f\in C^1(\bar{\Omega})$ is holomorphic on a bounded region $\Omega$ with smooth boundary, then for any $z\in\Omega$, we have $$f(z)=\frac{1}{2\pi i}\int\limits_{\partial\Omega}\frac{f(\zeta)}{\zeta-z}d\zeta.$$
Differentiating under the integral sign and iterating proves that holomorphic functions are smooth, and further that they have power series expansions whose coefficients are given by integration. This all hinges on integration.
The integral pops up in many other fundamental ways here. One is in the form of the mean-value property, which states that $$f(z_0)=\frac{1}{2\pi}\int\limits_0^{2\pi} f(z_0+re^{i\theta})d\theta$$ whenever $f$ is holomorphic on an open set $\Omega$ and the closed disk centered at $z_0$ of radius $r$ is contained in $\Omega.$ We use the integral to prove other important theorems, such as the maximum modulus principle and Liouville's theorem. We also use it to define a branch of the complex logarithm, to define the coefficients of a Laurent series, and to count zeros and poles of functions (the argument principle). We also like to calculate various types of integrals in the complex plane where the integrand has singularities (often as a trick to calculate real integrals, which is especially relevant for computing Fourier transforms). This uses the residue theorem, and residues themselves are calculated via limits. The theorem states that $$\int\limits_{\partial\Omega}f(z)dz=2\pi i\sum_j\text{Res}_{z_j}(f),$$ where $f$ is holomorphic on an open set $\Omega$ except at isolated singularities $\{z_j\},$ each of which has a relatively compact neighborhood on which $f$ has a Laurent series (the residue is the coefficient of index $-1$, which is itself an integral by construction of the Laurent series). I think that's enough about complex analysis.
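Actually, one last quick sanity check before we really leave complex analysis (a numerical sketch of my own, not from the references): approximating the Cauchy integral formula for $f(z)=e^z$ on the unit circle by a Riemann sum in the parameter $\theta$ recovers $f(z_0)$.

```python
import numpy as np

def cauchy_integral_formula(f, z0, n=1000, radius=1.0):
    """Approximate (1/2πi) ∮_{|ζ|=radius} f(ζ)/(ζ - z0) dζ by a Riemann sum
    in θ, with ζ(θ) = radius · e^{iθ} and dζ = i ζ dθ."""
    theta = 2 * np.pi * np.arange(n) / n
    zeta = radius * np.exp(1j * theta)
    dzeta = 1j * zeta * (2 * np.pi / n)
    return np.sum(f(zeta) / (zeta - z0) * dzeta) / (2j * np.pi)

z0 = 0.3 + 0.2j                                                # inside the unit circle
print(abs(cauchy_integral_formula(np.exp, z0) - np.exp(z0)))   # ≈ 0
```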
Now, let's talk a bit about measure theory. The Riemann integral is somewhat restrictive, so we generalize it to the Lebesgue integral (I have a post about the construction; see How to calculate an integral given a measure?). Note the involvement of limits in that post. If a function is Riemann integrable, then it is Lebesgue integrable, and the two integrals agree. We can define the Lebesgue integral on any measure space. Two of the biggest theorems are the monotone and dominated convergence theorems:
If $f_j\in L^1(X,\mu)$, $0\leq f_1(x)\leq f_2(x)\leq \cdots,$ and $\|f_j\|_{L^1}\leq C<\infty,$ then $f_j(x)$ converges $\mu$-a.e. to a limit $f(x)$, with $f\in L^1(X,\mu)$ and $\|f_j-f\|_{L^1}\rightarrow 0.$
and
If $f_j\in L^1(X,\mu)$ and $\lim_j f_j(x)=f(x)$ $\mu$-a.e., and there is an $F\in L^1(X,\mu)$ so that $F$ dominates each $|f_j|$ pointwise $\mu$-a.e., then $f\in L^1(X,\mu)$ and $\|f_j-f\|_{L^1}\rightarrow 0.$
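To see why the domination hypothesis matters, here is a small numerical sketch (my own; both sequences are standard textbook examples): $f_n=n\,\mathbf{1}_{(0,1/n)}\to 0$ pointwise but $\int f_n=1$ for every $n$ (no integrable dominating function exists), while $g_n(x)=x^n$ on $[0,1]$ is dominated by $F\equiv 1\in L^1$ and behaves exactly as the theorem predicts.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 1_000_001)[1:]     # grid on (0, 1]
dx = x[1] - x[0]

for n in [10, 100, 1000]:
    f_n = n * (x < 1.0 / n)                  # not dominated by any L^1 function
    g_n = x ** n                             # dominated by F ≡ 1
    print(f"n = {n:4d}   ∫f_n ≈ {np.sum(f_n)*dx:.3f}   ∫g_n ≈ {np.sum(g_n)*dx:.5f}")

# ∫f_n stays ≈ 1 even though f_n → 0 pointwise: no swapping without domination.
# ∫g_n → 0 = ∫ lim g_n, exactly as dominated convergence predicts.
```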
These theorems generalize immediately to $L^p$ spaces, as well, and they are used extensively to prove things in measure theory, functional analysis, and PDEs. The dominated convergence theorem generalizes the result of using uniform convergence to swap limit and integral. We can use these theorems to show that $L^p$ is complete for $p\in [1,\infty)$; in fact, it is a Banach space, since $\|\cdot\|_{L^p}$ defines a norm. One can show that if $p$ is in this range and $X$ is $\sigma$-finite, then the dual of $L^p$ is $L^q$, where $1/p+1/q=1,$ and the pairing is defined, wait for it, via integration. We're beginning to overlap a bit with functional analysis, so I'll switch gears. We often use the integral to define linear functionals, and one such example is in the Riesz representation theorem. Here, we find that the dual of $C(X)$, where $X$ is a compact metric space, is the space of finite, signed Borel measures (Radon measures). In particular, for any bounded linear functional $\omega$ on $C(X)$, there exists a unique Radon measure $\rho$ such that $$\omega (f)=\int\limits_X f\,d\rho.$$
Also, we get a generalization of the fundamental theorem of calculus using the Hardy-Littlewood maximal function:
Let $f\in L^1(\mathbb{R}^n, dx)$ and consider $$A_rf(x)=\frac{1}{m(B_r)}\int\limits_{B_r(x)}f(y)dy,$$ where $r>0.$ Then, $$\lim_{r\rightarrow 0} A_rf(x)=f(x)$$ a.e.
In fact, if $f\in L^p$, then $$\lim_{r\rightarrow 0}\frac{1}{m(B_r)}\int\limits_{B_r(x)}|f(y)-f(x)|^p dy=0$$ for a.e. $x$.
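Here's a tiny one-dimensional illustration of this (my own sketch; in 1-D, $B_r(x)=(x-r,x+r)$ and $m(B_r)=2r$): averaging an $L^1$ function with a jump over shrinking intervals recovers its value at a Lebesgue point.

```python
import math
from scipy.integrate import quad

def f(y):
    """An L^1 function with a jump discontinuity at 0."""
    return math.copysign(1.0, y) * math.exp(-abs(y))

def A_r(x, r):
    """A_r f(x) = (1 / m(B_r)) ∫_{x-r}^{x+r} f(y) dy, with m(B_r) = 2r."""
    integral, _ = quad(f, x - r, x + r)
    return integral / (2.0 * r)

x = 0.5                                        # a Lebesgue point of f
for r in [0.4, 0.1, 0.01, 0.001]:
    print(f"r = {r:5.3f}   A_r f(x) = {A_r(x, r):+.6f}   f(x) = {f(x):+.6f}")
```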
Something else of interest is swapping integrals themselves, which manifests in the theorems of Tonelli and Fubini. There are multiple versions, most of which involve objects requiring their own definitions, so I'll just give a quick version, the Fubini-Tonelli theorem for complete measures:
Let $(X,M,\mu)$ and $(Y,N,\nu)$ be complete, $\sigma$-finite measure spaces, and let $(X\times Y,\mathcal{L},\lambda)$ be the completion of $(X\times Y,M\otimes N, \mu\times\nu).$ If $f$ is $\mathcal{L}$-measurable, and $f\in L^1(X\times Y,\lambda),$ then $$\int fd\lambda=\int\int f(x,y)d\mu(x)d\nu(y)=\int\int f(x,y)d\nu(y)d\mu(x).$$
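As a quick sanity check (my own sketch), nested one-dimensional quadrature in either order gives the same answer for an integrable function on $[0,1]^2$:

```python
import math
from scipy.integrate import quad

def f(x, y):
    return math.exp(-(x + y)) * math.cos(x * y)    # certainly in L^1([0,1]^2)

inner_x = lambda y: quad(lambda x: f(x, y), 0.0, 1.0)[0]
inner_y = lambda x: quad(lambda y: f(x, y), 0.0, 1.0)[0]

print(quad(inner_x, 0.0, 1.0)[0])   # ∫ dy ( ∫ dx f )
print(quad(inner_y, 0.0, 1.0)[0])   # ∫ dx ( ∫ dy f ): the same number
```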
Now, we'll move on to functional analysis. This is a pretty big topic, so I'm just going to pick a few things to talk about. We already had some discussion of $L^p$ spaces, and a lot of the remaining discussion will be related to them in some manner, making all of the discussion inherently reliant on the integral. For this reason, I'm going to stop outlining exactly when we're using integrals.
Recall that $L^p$ spaces constitute Banach spaces. It is important to know that if $p=2,$ then the space becomes a Hilbert space, which is a very nice structure and makes the use of $L^2$ extremely common in PDEs. If $\mu$ is $\sigma$-finite and the $\sigma$-algebra associated to $X$ is countably generated, then $L^2(X,\mu)$ is separable, allowing us to get a countable orthonormal basis. For example, $\{e^{i n \theta}\}_{n\in\mathbb{Z}}$ is an orthonormal basis for $L^2(S^1,d\theta/2\pi)$ (see Fourier series).
Something that is very useful in PDEs is to define functions of operators, such as $\sqrt{-\Delta},$ where $\Delta$ is the Laplacian, and the integral arises naturally in various forms of functional calculus. One such example is the holomorphic functional calculus (I will discuss more later on). If we have a bounded linear operator $T$, a bounded set $\Omega\subset\mathbb{C}$ with smooth boundary whose interior contains the spectrum $\sigma(T)$ of $T$, and a function $f$ holomorphic on a neighborhood of $\overline{\Omega},$ then we can define $$f(T)=\frac{1}{2\pi i}\int\limits_{\partial\Omega} f(\zeta)R_\zeta d\zeta,$$ where $R_\zeta:=(\zeta-T)^{-1}$ is called the resolvent of $T$. This is one simple way of making rigorous the idea of functions of operators. The holomorphic functional calculus can be extremely useful, although functional calculus can be made more general still (such as the Borel functional calculus).
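In finite dimensions, this is something you can actually compute. A minimal sketch (my own; $T$ is a made-up $2\times 2$ matrix whose eigenvalues lie inside the unit circle), approximating the contour integral by a Riemann sum and comparing against the matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

T = np.array([[0.5, 0.2],
              [0.1, 0.3]])                    # σ(T) ≈ {0.23, 0.57}, inside |ζ| = 1

def f_of_T(f, T, radius=1.0, n=2000):
    """(1/2πi) ∮_{|ζ|=radius} f(ζ) (ζ - T)^{-1} dζ via a Riemann sum in θ."""
    I = np.eye(T.shape[0])
    acc = np.zeros_like(T, dtype=complex)
    for k in range(n):
        zeta = radius * np.exp(2j * np.pi * k / n)
        R = np.linalg.inv(zeta * I - T)       # the resolvent R_ζ
        acc += f(zeta) * (2j * np.pi * zeta / n) * R    # dζ = i ζ dθ, dθ = 2π/n
    return acc / (2j * np.pi)

print(np.abs(f_of_T(np.exp, T) - expm(T)).max())        # ≈ 0
```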
One of the commonly-studied classes of operators is that of integral operators, given in the form $$Ku(x)=\int\limits_X k(x,y)u(y)d\mu (y)$$ on a measure space $(X,\mu)$. These arise naturally in solving differential/integral equations. You'll notice, for example, that the fundamental solutions of some of the basic PDEs are given via convolution, in which case the solution involves integral operators. You may have heard of the Hilbert-Schmidt kernel theorem, which says the following:
If $T:L^2(X_1,\mu_1)\rightarrow L^2(X_2,\mu_2)$ is a Hilbert-Schmidt operator, then there exists $K\in L^2(X_1\times X_2, \mu_1\times \mu_2)$ so that $$(Tu,v)_{L^2}=\int\int K(x_1,x_2)u (x_1)\overline{v(x_2)}d\mu_1(x_1)d\mu_2(x_2).$$
The converse also holds: given $K\in L^2(X_1\times X_2, \mu_1\times \mu_2)$, then $T$ as written above defines a Hilbert-Schmidt operator, and it satisfies $\|T\|_{\text{HS}}=\|K\|_{L^2}.$ For a generalization to tempered distributions (which we'll talk about later), look up the Schwartz kernel theorem.
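In the discrete case this is very concrete: if $X_1$ and $X_2$ are finite sets with counting measure, the kernel $K$ is just a matrix, the operator acts by summation against $K$, and $\|T\|_{\text{HS}}$ (the $\ell^2$ norm of the singular values) coincides with $\|K\|_{L^2}$ (the Frobenius norm). A quick sketch of my own:

```python
import numpy as np

rng = np.random.default_rng(0)
K = rng.standard_normal((4, 5))     # kernel K(x1, x2) on a 4-point × 5-point space

T = K.T                             # (Tu)(x2) = Σ_{x1} K(x1, x2) u(x1)

hs_norm = np.sqrt(np.sum(np.linalg.svd(T, compute_uv=False) ** 2))
l2_norm = np.linalg.norm(K)         # Frobenius norm = ||K||_{L^2} for counting measure
print(hs_norm, l2_norm)             # equal, as the theorem predicts
```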
Another area of interest is semigroups, which (once again) have applications in differential equations. If we take a contraction semigroup $\{S(t)\}_{t\geq 0}$ on a real Banach space $X$ with infinitesimal generator $A$, then we have the following cool result:
If $\lambda>0,$ then $\lambda$ is in the resolvent set of $A$, denoted $\rho(A),$ and if $R_\lambda=(\lambda-A)^{-1}$ denotes the resolvent of $A$, then $$R_\lambda u=\int\limits_0^\infty e^{-\lambda t} S(t) u\ dt$$ for $u\in X,$ and $\|R_\lambda\|\leq\frac{1}{\lambda}.$
That is, we can write the resolvent as the Laplace transform of the semigroup. From here, one can prove some major results, like the Hille-Yosida theorem for contraction semigroups:
Let $A$ be a closed, densely-defined linear operator on $X$. Then, $A$ is the generator of a contraction semigroup $\{S(t)\}_{t\geq 0}$ if and only if $$(0,\infty)\subset\rho(A)\hspace{.25in}\text{ and } \hspace{.25in} \|R_\lambda\|\leq\frac{1}{\lambda}$$ for $\lambda>0.$
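Before generalizing, here is the resolvent-as-Laplace-transform identity checked numerically in finite dimensions (a sketch of my own; $A$ is a made-up matrix with spectrum in the left half-plane, so $\lambda=1>0$ lies in $\rho(A)$):

```python
import numpy as np
from scipy.linalg import expm, inv
from scipy.integrate import quad

A = np.array([[-1.0, 0.5],
              [0.0, -2.0]])            # spectrum {-1, -2} in the left half-plane
lam = 1.0

# Entrywise Laplace transform ∫_0^∞ e^{-λt} (e^{tA})_{ij} dt
R_laplace = np.array([[quad(lambda t: np.exp(-lam * t) * expm(t * A)[i, j],
                            0.0, np.inf)[0]
                       for j in range(2)] for i in range(2)])

R_exact = inv(lam * np.eye(2) - A)     # (λ - A)^{-1}
print(np.abs(R_laplace - R_exact).max())   # ≈ 0
```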
We can generalize past contraction semigroups to get a more general version of this theorem. Let me also state a related theorem, Stone's theorem:
If $A$ is self-adjoint, then $iA$ generates the unitary group $U(t)=e^{itA}.$ Conversely, if $\{U(t)\}_{t\in\mathbb{R}}$ is a strongly continuous one-parameter group of unitary operators, then there exists a self-adjoint operator $A$ so that $U(t)=e^{itA}.$
Semigroup theory has direct applications to PDEs. For example, if you have a parabolic equation, say $$\partial_t u+Lu=0$$ with appropriate boundary and initial conditions, where $L$ is uniformly elliptic, then you can apply semigroup theory to this problem to guarantee a unique solution given by $u(t)=S(t)u_0,$ where $u_0=u(\cdot,0)$. This is a generalization of solving linear ODEs on finite-dimensional spaces, \begin{align*}x'&=Ax\\ x(0)&=x_0, \end{align*} which have the solution $x(t)=e^{tA}x_0;$ here, $\{e^{tA}\}_{t\geq 0}$ forms a one-parameter semigroup.
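Here is that finite-dimensional picture in code (my own sketch): $e^{tA}$ satisfies the semigroup law, and $x(t)=e^{tA}x_0$ solves the ODE.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])             # generator; e^{tA} is rotation by angle t
x0 = np.array([1.0, 0.0])
S = lambda t: expm(t * A)

# Semigroup property: S(t + s) = S(t) S(s)
print(np.abs(S(0.7 + 0.4) - S(0.7) @ S(0.4)).max())       # ≈ 0

# x(t) = S(t) x0 solves x' = Ax: compare a centered difference with A x(t)
t, h = 1.3, 1e-6
x_dot = (S(t + h) @ x0 - S(t - h) @ x0) / (2 * h)
print(np.abs(x_dot - A @ (S(t) @ x0)).max())              # ≈ 0 (up to O(h²))
```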
As I mentioned earlier, Fourier analysis is certainly its own field, but it uses a lot of functional-analytic techniques, and it is monumentally important in PDEs. Let us first define the Fourier transform $\mathcal{F}:L^1(\mathbb{R}^n)\rightarrow L^\infty (\mathbb{R}^n)$ by $$\mathcal{F} u(\xi)={\hat{u}}(\xi):=(2\pi)^{-n/2}\int\limits_{\mathbb{R}^n} u(x)e^{-ix\cdot \xi}\ dx.$$ We can also extend this to a bounded operator from $L^2$ to $L^2.$ For nice enough functions, the Fourier transform enjoys many useful properties, such as $$D_{\xi}^\alpha (\mathcal{F} u)=\mathcal{F}((-x)^\alpha u)$$ and $$\mathcal{F}(D_x^\alpha u)=\xi^\alpha \mathcal{F} u,$$ where $D^\alpha=\frac{1}{i^{|\alpha|}}\partial^\alpha$ and $\alpha$ is a multi-index. It turns out that a very natural place to define this is the Schwartz space $\mathcal{S},$ on which $\mathcal{F}$ is a topological isomorphism. We have the famed Fourier inversion formula $$u(x)=(2\pi )^{-n/2}\int\limits_{\mathbb{R}^n}\hat{u}(\xi)e^{ix\cdot \xi}\ d\xi,$$ and from here, we can get the Plancherel theorem, $$\|u\|^2_{L^2}=\|\mathcal{F}u\|^2_{L^2},$$ so $\mathcal{F}$ is an isometric isomorphism on $L^2$.
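A quick numerical sketch (my own) of this normalization: the Gaussian $e^{-x^2/2}$ is its own Fourier transform, and Plancherel holds up to quadrature error.

```python
import numpy as np

x = np.arange(-20.0, 20.0, 0.01)
dx = 0.01
u = np.exp(-x ** 2 / 2)                     # fixed point of F in this normalization

def fourier(xi):
    """(2π)^{-1/2} ∫ u(x) e^{-i x ξ} dx, approximated by a Riemann sum."""
    return np.sum(u * np.exp(-1j * x * xi)) * dx / np.sqrt(2 * np.pi)

xi = np.arange(-20.0, 20.0, 0.01)
u_hat = np.array([fourier(s) for s in xi])

print(np.abs(u_hat - np.exp(-xi ** 2 / 2)).max())     # F(Gaussian) = Gaussian
print(np.sum(np.abs(u) ** 2) * dx,                    # ||u||²_{L²} ...
      np.sum(np.abs(u_hat) ** 2) * dx)                # ... equals ||û||²_{L²}
```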
We can define the Fourier transform on the dual space of $\mathcal{S},$ the space of tempered distributions (denoted $\mathcal{S}'$), via duality: $$\langle\hat{u},f\rangle=\langle u,\hat{f}\rangle,$$ where $u\in\mathcal{S}'$ and $f\in\mathcal{S}.$ Thanks to the algebra and calculus of distributions, one can extend all of the previously-listed properties to this setting. Now, we might as well mention a powerful and important generalization of the Hilbert-Schmidt kernel theorem, the Schwartz kernel theorem:
Let $A:\mathcal{S}\rightarrow\mathcal{S}'$ be a continuous linear map. Then, there exists $K_A\in\mathcal{S}'(\mathbb{R}^n\times\mathbb{R}^n)$ so that for all $u,v\in\mathcal{S},$ $$\langle Au,v\rangle=\langle K_A, u\otimes v\rangle,$$ where $(u\otimes v)(x,y)=u(x)v(y)\in \mathcal{S}(\mathbb{R}^n\times\mathbb{R}^n).$
We sometimes abuse notation and write this as $Au(x)=\int K_A(x,y) u(y)\, dy,$ so that $$\langle Au,v\rangle=\iint K_A(x,y) v(x)u(y)\, dydx.$$
We can use the Fourier transform to define another functional calculus via Fourier multipliers. Motivated by the Fourier transform of the Laplacian, we define $$f(D)u=\mathcal{F}^{-1}\left(f\left(|\xi|\right)\mathcal{F}u\right)$$ for a "nice" function $f$ (by "nice," I mean such that the above makes sense). This allows us to make sense of objects like $e^{t\Delta}$ or $\cos\left(t\sqrt{-\Delta}\right).$ The former is related to the fundamental solution of the heat equation and the latter to the fundamental solution of the wave equation (notice anything related to semigroups for the former?). Fourier multipliers generalize to pseudodifferential operators (which are also a generalization of singular integral operators), but I'd rather not get into this, since this section is already very long. Fourier multipliers can also be used to give a general definition of Sobolev spaces, which are of fundamental importance in PDEs. We define these spaces, for any real $k$, as $$W^{k,p}(\mathbb{R}^n)=\left\lbrace u\in \mathcal{S}'(\mathbb{R}^n): \mathcal{F}^{-1}\left(\langle \xi\rangle ^{k} \mathcal{F} u\right)\in L^p(\mathbb{R}^n)\right\rbrace,$$ where $\langle \xi\rangle=\left(1+|\xi|^2\right)^{1/2}.$ These are Banach spaces, and when $p=2,$ they are Hilbert spaces. For this reason, we use the provocative notation $W^{k,2}=H^k.$
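Here is a minimal sketch (my own) of $e^{t\Delta}$ as a Fourier multiplier in one dimension, on a periodic grid via the FFT: each mode $\sin(kx)$ decays like $e^{-k^2 t}$, which is exactly the multiplier $e^{-t|\xi|^2}$.

```python
import numpy as np

N, L = 256, 2 * np.pi
x = np.linspace(0.0, L, N, endpoint=False)
xi = 2 * np.pi * np.fft.fftfreq(N, d=L / N)       # integer wavenumbers for L = 2π

u0 = np.sin(x) + 0.5 * np.sin(3 * x)
t = 0.1

u_t = np.fft.ifft(np.exp(-t * xi ** 2) * np.fft.fft(u0)).real   # e^{tΔ} u0

exact = np.exp(-t) * np.sin(x) + 0.5 * np.exp(-9 * t) * np.sin(3 * x)
print(np.abs(u_t - exact).max())                  # ≈ machine precision
```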
Something very significant that I haven't discussed yet is spectral theory, so I'll say a bit about it now. This is another field with significant applications to PDEs (and quantum mechanics). Spectral theory is directly connected to the Hilbert space $L^2$, so the integral is slightly "hidden." Here is one version of the spectral theorem, which can be proven using the holomorphic functional calculus or the Fourier transform on $\mathcal{S}'$:
If $A$ is a bounded, self-adjoint operator on a Hilbert space $H$, then there exists a measure space $(X,\mathfrak{F},\mu)$, a unitary map $\Phi:H\rightarrow L^2(X,\mu),$ and $a\in L^\infty (X,\mu)$ so that $$\Phi A\Phi^{-1}f(x)=a(x)f(x)$$ for all $f\in L^2(X,\mu).$ Here, $a$ is real-valued, and $\|a\|_{L^\infty}=\|A\|.$
There is also a near-identical version for bounded unitary operators; the only difference is that $|a|=1$ on $X$. We can further extend to normal operators. This also generalizes to unbounded operators:
If $A$ is an unbounded, self-adjoint operator on a separable Hilbert space $H$, then there is a measure space $(X,\mu)$, a unitary map $\Phi:L^2(X,\mu)\rightarrow H$, and a real-valued, measurable function $a$ on $X$ such that $$\Phi^{-1}A\Phi f(x)=a(x)f(x)$$ for $\Phi f\in \mathcal{D}(A).$ If $f\in L^2(X,\mu),$ then $\Phi f\in\mathcal{D}(A)$ if and only if $af\in L^2(X,\mu).$
This gives us a new functional calculus: if $f:\mathbb{R}\rightarrow \mathbb{C}$ is Borel, then we can define $f(A)$ via $$\Phi^{-1}f(A)\Phi g(x)=f(a(x))g(x).$$ If $f$ is bounded and Borel, then we can define this for any $g\in L^2(X,\mu),$ and $f(A)$ will be a bounded operator on $H$. If not, then we define $$\mathcal{D}(f(A))=\{\Phi g\in H: g\in L^2(X,\mu)\text{ and } f(a(x))g\in L^2(X,\mu)\}.$$
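In finite dimensions, the spectral theorem and the resulting functional calculus are exactly diagonalization. A sketch (my own; $A$ is a made-up symmetric matrix): $\Phi$ is the eigenvector matrix, $a$ is the vector of eigenvalues, and $f(A)=\Phi\, \mathrm{diag}(f(a))\, \Phi^*$, which makes sense even for a non-smooth Borel $f$ like an indicator, producing a spectral projection.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])                 # self-adjoint (real symmetric)

a, Phi = np.linalg.eigh(A)                 # A = Φ diag(a) Φ*: multiplication-operator picture

def f_of_A(f):
    """Functional calculus: act by f on the 'multiplier' a, conjugate back to H."""
    return Phi @ np.diag(f(a)) @ Phi.T

print(np.abs(f_of_A(np.exp) - expm(A)).max())        # matches the matrix exponential

P = f_of_A(lambda s: (s > 2.0).astype(float))        # indicator: a spectral projection
print(np.abs(P @ P - P).max())                       # P² = P
```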
I think I'll stop here. PDEs utilizes all of the above, but since it is an application of these subjects, rather than "pure analysis," I'm going to leave that out.
TL;DR Limits are fundamental in analysis, and one such example of their use is in the definition of the integral, which is ubiquitous in the field.
References:
1. Gerald Folland: Real Analysis
2. Elias M. Stein and Rami Shakarchi: Complex Analysis
3. Michael Taylor: Introduction to Complex Analysis (link to pdf on his website: http://mtaylor.web.unc.edu/files/2018/04/complex.pdf)
4. Michael Taylor: Measure Theory and Integration
5. Michael Taylor: Partial Differential Equations I
6. Michael Taylor: Partial Differential Equations II
7. John Conway: A Course in Functional Analysis
8. Lawrence Evans: Partial Differential Equations
From my viewpoint, Real Analysis is a study of functions of (one or several) real variables. Everything else (limits, derivatives, integrals, infinite series, etc.) is a tool serving this purpose. [There is a mild exception one has to make here for sequences and series of real numbers/vectors; these are functions defined on the set of natural numbers and, sometimes, the integers.] The theory of real numbers and limits was developed (in the 19th century) in order to make the study of functions rigorous. Note that the theory of limits and the "epsilontics" is not the only way to go. The alternative (at least, the only alternative I know) is Nonstandard Analysis, which justifies the notion of infinitesimally small and infinitely large quantities used by Newton, Leibniz, and others for (roughly) the first 150 years of Real Analysis's existence (before limits were introduced by Cauchy and, in their modern form, by Weierstrass).
For instance, what is the purpose (or, rather, purposes) of computing derivatives of functions? It is to determine if the given function is increasing/decreasing/concave/convex or to approximate the given function by some polynomial (usually a polynomial of degree one).
What is the purpose of computing limits? It is to determine "approximate" behavior of the function when the input variable is close to some (finite or infinite) value.
What is the purpose of computing integrals? It is to compute length (of curves), areas (of surfaces), volumes (of solids), or to find solutions of differential equations (which are equations on functions involving some derivatives). In the geometric problems (lengths, areas and volumes) one computes a single number "measuring" the given function (say, the length of a curve).
What is the purpose of computing Taylor (Fourier) series? It is to approximate functions with polynomials (or sums of trigonometric functions) which are (usually) easier to analyze than general smooth functions.
This is how it was from the very beginning of Real Analysis (Newton, Leibniz, Bernoulli, Euler, and many others).
Long ago someone told me this. I still remember it ...
Sometimes I find myself just pushing symbols around, and I wonder, "Am I really doing analysis?" But when an argument begins, "Let $\varepsilon > 0$," then I know it really is analysis.