How does one motivate the analytic continuation of the Riemann zeta function?
You do not try to motivate it! Even Riemann didn't see a nice argument right away.
Riemann's first proof of the functional equation used a contour integral and led him to a yucky functional equation expressing $\zeta(1-s)$ in terms of $\zeta(s)$ multiplied by things like $\Gamma(s)$ and $\cos(\pi{s}/2)$. Only after finding this functional equation did Riemann observe that the functional equation he found could be expressed more elegantly as $Z(1-s) = Z(s)$, where $Z(s) = \pi^{-s/2}\Gamma(s/2)\zeta(s)$. Then he gave a proof of this more symmetric functional equation for $Z(s)$ using a transformation formula for the theta-function $\theta(t) = \sum_{n \in {\mathbf Z}} e^{-\pi{n^2}t},$ which is $$\theta(1/t) = \sqrt{t}\theta(t).$$ In a sense that transformation formula for $\theta(t)$ is equivalent to the functional equation for $Z(s)$. The transformation formula for $\theta(t)$ is itself a consequence of the Poisson summation formula and also reflects the fact that $\theta(t)$ is a modular form of weight 1/2.
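The transformation formula is easy to test numerically, since the theta sum converges extremely fast. A minimal sketch in Python (the truncation point `N` is an arbitrary choice of mine; a handful of terms already give double precision):

```python
import math

def theta(t, N=30):
    """Truncated Jacobi theta sum: theta(t) = sum_{n in Z} exp(-pi n^2 t)."""
    return sum(math.exp(-math.pi * n * n * t) for n in range(-N, N + 1))

# Transformation formula: theta(1/t) = sqrt(t) * theta(t)
t = 2.0
print(theta(1.0 / t))            # ~ 1.41950
print(math.sqrt(t) * theta(t))   # ~ 1.41950
```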
Instead of trying to motivate the technique of analytically continuing the Riemann zeta-function, I think it's more important to emphasize what concepts are needed to prove it: Poisson summation and a connection with modular forms (for $\theta(t)$). These ideas are needed for the analytic continuation of most other Dirichlet series with Euler products that generalize $\zeta(s)$, so an awareness that Riemann's method continues to work in other cases, by suitably jazzing up Poisson summation and the link to modular forms, will leave the reader/student with an appreciation for what goes into the proof.
This proof is not intuitive and I think it's a good illustration of von Neumann's comment that in mathematics we don't understand things, but rather we just get used to them.
(1) Titchmarsh points out in his book on the zeta function (section 2.3) that if you blindly apply the Poisson summation formula to the function $f(x)=|x|^s$, you get the functional equation of the Riemann zeta function immediately; he gives a reference to a paper of Mordell where this procedure is justified.
(2) However, if I had to motivate this in an (introductory graduate) class, with students knowledgeable about complex analysis, I would do so as follows:
-- We want to count primes up to $x$ (after all, this is what Riemann had in mind);
-- Writing the number of primes as a sum of some function $g(n)$ for $n$ up to $x$, it is fairly natural (though perhaps with hindsight) to use harmonic analysis to go further; it is here also rather natural to use characters of the positive reals, and hence one gets a Mellin integral on a vertical line with large enough real part ($\operatorname{Re}(s)=\sigma>1$, say), where the dependency on $x$ is entirely in $x^s$, and the logarithmic derivative of the zeta function comes out;
-- A naive attempt to estimate from this fails, since $|x^s|=x^{\sigma}$ is worse than the trivial bound for the number of primes up to $x$; it is therefore natural to try to move the contour of integration to the left to reduce the size of $\sigma$;
-- Hence whether something can be done along these lines is immediately related to the possibility of performing analytic continuation of the (logarithmic derivative) of the zeta function;
-- In Riemann's time, I have the feeling that people like him would then just shrug and say, "Ok, let's continue $\zeta(s)$ as much as we can", and would find a way of doing so. Most attempts succeed at also getting the functional equation, and it becomes natural to try to understand the latter. But there are other problems where the basic strategy is the same, and one gets a limited range of analytic continuation (and there is no functional equation), while still being able to get an asymptotic formula for the sum of original interest. [Indeed, in some sense, this is what happened to Riemann: to actually count primes, one needs to continue the logarithmic derivative, and if the zeros of $\zeta(s)$ are badly located, the strategy fails -- and he did not prove the Prime Number Theorem.]
Riemann's analytic continuations and derivation of the functional equations for $\zeta$ and $\xi$ seem quite natural and intuitive from the perspective of basic complex analysis.
Riemann in the second equation of his classic paper On the Number of Prime Numbers less than a Given Quantity (1859) writes down the Laplace (1749-1827) transform
$$\int_{0}^{+\infty}e^{-nx}x^{s-1}dx=\frac{(s-1)!}{n^s},$$ valid for $\operatorname{Re}(s)>0.$
With $n=1$ this is the iconic Euler (1707-1783) integral representation of the gamma function, and noting that
$(s-1)!=\frac{\pi}{\sin(\pi s)}\frac{1}{(-s)!}$ from the symmetric relation $\frac{\sin(\pi s)}{\pi s}=\frac{1}{s!\,(-s)!},$
this can be rewritten as
$$\frac{\sin(\pi s)}{\pi}\int_{0}^{\infty}e^{-x}x^{s-1}dx=\frac{1}{(-s)!},$$
suggesting quite naturally to someone as familiar with analytic continuation as Riemann that
$$\frac{-1}{2\pi i}\int_{+\infty}^{+\infty}e^{-x}(-x)^{s-1}dx=\frac{1}{(-s)!},$$
valid for all $s$, where the line integral is blown up around the positive real axis into the complex plane to sandwich it with a branch cut for $x>0$ and to loop the origin in the positive sense from positive infinity to positive infinity. Deflating the contour back to the real axis introduces a factor $-e^{i\pi s}+e^{-i\pi s}=-2i\sin(\pi s)$. (This special contour is now called the Hankel contour after Hermann Hankel (1839-1873), who became a student of Riemann in 1860 and published this integral for the reciprocal gamma function in his habilitation of 1863. Most likely Riemann introduced him to this maneuver.)
Riemann in his third equation observes that
$$(s-1)!\zeta(s)=(s-1)!\sum_{n=1}^{\infty }\frac{1}{n^s}=\int_{0}^{+\infty}\sum_{n=1}^{\infty }e^{-nx}x^{s-1}dx=\int_{0}^{+\infty}\frac{1}{e^x-1}x^{s-1}dx$$
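This third equation is easy to check numerically. A sketch in Python at $s=4$, where $(s-1)!\,\zeta(s)=6\,\zeta(4)=\pi^4/15$; the quadrature scheme, cutoff, and the helper name `mellin_bose` are ad hoc choices of mine:

```python
import math

def integrand(x, s):
    # x^(s-1)/(e^x - 1); for s > 2 the integrand tends to 0 at the origin
    if x == 0.0:
        return 0.0
    return x**(s - 1) / math.expm1(x)

def mellin_bose(s, b=40.0, steps=4000):
    """Composite Simpson approximation of int_0^inf x^(s-1)/(e^x - 1) dx."""
    h = b / steps
    total = integrand(0.0, s) + integrand(b, s)
    for k in range(1, steps):
        total += (4 if k % 2 else 2) * integrand(k * h, s)
    return total * h / 3

print(mellin_bose(4))        # ~ 6.49394
print(math.pi**4 / 15)       # Gamma(4) * zeta(4) = pi^4/15 ~ 6.49394
```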
and then immediately writes down as his fourth equality the analytic continuation
$$2\sin(\pi s)(s-1)!\zeta(s)=i\int_{+\infty}^{+\infty}\frac{(-x)^{s-1}}{e^x-1}dx,$$
valid for all $s$, which can be rewritten as
$$\frac{\zeta(s)}{(-s)!}=\frac{-1}{2\pi i}\int_{+\infty}^{+\infty}\frac{(-x)^{s-1}}{e^x-1}dx$$
as naturally suggested by the analytic continuation of the reciprocal of gamma above.
For $m=0,1,2, ...,$ this gives
$$\zeta(-m)=\frac{(-1)^{m}}{2\pi i}\oint_{|z|=1}\frac{m!}{z^{m+1}}\frac{1}{e^z-1}dz=\frac{(-1)^{m}}{m+1}\frac{1}{2\pi i}\oint_{|z|=1}\frac{(m+1)!}{z^{m+2}}\frac{z}{e^z-1}dz$$
from which you can see, if you are familiar with the exponential generating function (e.g.f.) for the Bernoulli numbers, that the integral vanishes for even $m \geq 2$. Euler published the e.g.f. in 1740 (MSE-Q144436), and Riemann certainly was familiar with these numbers and states that his fourth equality implies the vanishing of $\zeta(-m)$ for positive even $m$ (but gives no explicit proof). He certainly was also aware of Euler's heuristic functional eqn. for integer values of the zeta function, and Edwards in Riemann's Zeta Function (pg. 12, Dover ed.) even speculates that ".. it may well have been this problem of deriving (2) [Euler's formula for $\zeta(2n)$ for positive $n$] anew which led Riemann to the discovery of the functional equation ...."
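Concretely, the Cauchy integral above picks out the Taylor coefficient of $z^m$ in $\frac{1}{e^z-1}=\frac{1}{z}\sum_{k\geq 0}B_k\frac{z^k}{k!}$, giving $\zeta(-m)=(-1)^m\frac{B_{m+1}}{m+1}$ (with the convention $B_1=-\frac{1}{2}$), which vanishes exactly when $B_{m+1}$ is an odd-index Bernoulli number, i.e., for even $m\geq 2$. A numerical sketch in Python; the Euler-Maclaurin evaluation of $\zeta$ is my own choice of independent check, not anything from Riemann's paper:

```python
import math
from fractions import Fraction

def bernoulli(n):
    """B_0..B_n for the e.g.f. z/(e^z - 1), via sum_{j=0}^{m} C(m+1,j) B_j = 0."""
    B = [Fraction(0)] * (n + 1)
    B[0] = Fraction(1)
    for m in range(1, n + 1):
        B[m] = -sum(Fraction(math.comb(m + 1, j)) * B[j] for j in range(m)) / (m + 1)
    return B

def zeta(s, N=50):
    """Euler-Maclaurin approximation of zeta(s); with these two correction
    terms it is accurate for Re(s) > -3 or so (and exact at s = 0,-1,...,-4)."""
    tail = (N**(1 - s) / (s - 1) + N**(-s) / 2
            + s * N**(-s - 1) / 12
            - s * (s + 1) * (s + 2) * N**(-s - 3) / 720)
    return sum(n**(-s) for n in range(1, N)) + tail

B = bernoulli(5)
for m in range(5):
    print(m, zeta(-m), float((-1)**m * B[m + 1] / (m + 1)))
# zeta(0) = -1/2, zeta(-1) = -1/12, zeta(-2) = 0, zeta(-3) = 1/120, zeta(-4) = 0
```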
Riemann then proceeds to derive the functional eqn. for zeta from his equality by using the singularities of $\frac{1}{e^z-1}$ to obtain basically
$$\zeta(s)=2(2\pi)^{s-1}\Gamma(1-s)\sin(\tfrac12\pi s)\zeta(1-s),$$
and says three lines later essentially that it may be expressed symmetrically about $s=1/2$ as
$$\xi(s) = \pi^{-s/2}\ \Gamma\left(\frac{s}{2}\right)\ \zeta(s)=\xi(1-s).$$
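Both the asymmetric and the symmetric forms of the functional equation can be verified numerically in the critical strip. A sketch in Python; the Euler-Maclaurin evaluation of $\zeta$ and the sample point $s=0.3$ are my own ad hoc choices:

```python
import math

def zeta(s, N=50):
    """Euler-Maclaurin approximation of zeta(s), accurate for Re(s) > -3, s != 1."""
    tail = (N**(1 - s) / (s - 1) + N**(-s) / 2
            + s * N**(-s - 1) / 12
            - s * (s + 1) * (s + 2) * N**(-s - 3) / 720)
    return sum(n**(-s) for n in range(1, N)) + tail

def xi(s):
    # completed zeta: pi^(-s/2) Gamma(s/2) zeta(s)
    return math.pi**(-s / 2) * math.gamma(s / 2) * zeta(s)

s = 0.3
# asymmetric form: zeta(s) = 2 (2 pi)^(s-1) Gamma(1-s) sin(pi s/2) zeta(1-s)
lhs = zeta(s)
rhs = (2 * (2 * math.pi)**(s - 1) * math.gamma(1 - s)
       * math.sin(math.pi * s / 2) * zeta(1 - s))
print(lhs, rhs)           # agree to roundoff
print(xi(s), xi(1 - s))   # symmetric form: xi(s) = xi(1-s)
```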
Riemann then says, "This property of the function [$\xi(s)=\xi(1-s)$] induced me to introduce, in place of $(s-1)!$, the integral $(s/2-1)!$ into the general term of the series $\sum \frac{1}{n^s}$, whereby one obtains a very convenient expression for the function $\zeta(s)$." And then he proceeds to introduce what Edwards calls a second proof of the functional eqn. using the Jacobi theta function.
Edwards wonders:
"Since the second proof renders the first proof wholly unnecessary, one may ask why Riemann included the first proof at all. Perhaps the first proof shows the argument by which he originally discovered the functional equation or perhaps it exhibits some properties which were important in his understanding of it."
I wonder whether, as his ideas evolved before he wrote the paper, he first constructed $\xi(s)$ by noticing that multiplying $\zeta(s)$ by $\Gamma(\frac{s}{2})$ introduces a simple pole at $s=0$ thereby reflecting the pole of $\zeta(s)$ at $s=1$ through the line $s=1/2$ and that the other simple poles of $\Gamma(\frac{s}{2})$ are removed by the zeros on the real line of the zeta function. The $\pi^{-s/2}$ can easily be determined as a normalization by an entire function $c^s$ where $c$ is a constant, using the complex conjugate symmetry of the gamma and zeta functions about the real axis. Riemann had fine physical intuition and would have thought holistically in terms of the zeros of a function (see Euler's proof of the Basel problem) and its poles, the importance of which he certainly stressed.
Let's extend the reasoning above for the Jacobi theta function
$$\vartheta (0;ix^2)=\sum_{n=-\infty}^{\infty }\exp(-\pi n^{2}x^2).$$
Viewing a modified Mellin transform as an interpolation of Taylor series coefficients (MO-Q79868), it's easy to guess (note the zeros of the coefficients) that
$$\int^{\infty}_{0}\exp(-x^2)\frac{x^{s-1}}{(s-1)!} dx = \cos(\frac{\pi s}{2})\frac{(-s)!}{(-\frac{s}{2})!} = \frac{1}{2}\frac{(\frac{s}{2}-1)!}{(s-1)!},$$
and, therefore,
$$\int^{\infty}_{0}\exp(-\pi (n x)^2)x^{s-1} dx = \frac{1}{2}\pi^{-s/2}(\frac{s}{2}-1)! \frac{1}{n^s}.$$
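This Gaussian Mellin transform can be sanity-checked numerically; a sketch in Python with ad hoc quadrature parameters, taking $s=2.5$ and $n=3$:

```python
import math

def simpson(f, a, b, steps=2000):
    """Composite Simpson quadrature on [a, b]."""
    h = (b - a) / steps
    total = f(a) + f(b)
    for k in range(1, steps):
        total += (4 if k % 2 else 2) * f(a + k * h)
    return total * h / 3

s, n = 2.5, 3
# int_0^inf exp(-pi (n x)^2) x^(s-1) dx  vs.  (1/2) pi^(-s/2) Gamma(s/2) n^(-s)
lhs = simpson(lambda x: math.exp(-math.pi * (n * x)**2) * x**(s - 1), 0.0, 3.0)
rhs = 0.5 * math.pi**(-s / 2) * math.gamma(s / 2) * n**(-s)
print(lhs, rhs)
```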
By now you should be able to complete the line of reasoning to obtain, for $\operatorname{Re}(s)>1,$
$$\xi(s)=\int_{0^+}^{\infty }[\vartheta (0;ix^2)-1]x^{s-1}dx=\pi^{-s/2}\ \Gamma\left(\frac{s}{2}\right)\ \zeta(s).$$
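For $\operatorname{Re}(s)>1$ this Mellin representation can be checked directly; a sketch in Python at $s=3$, where $\pi^{-3/2}\Gamma(\frac{3}{2})\zeta(3)=\zeta(3)/(2\pi)$. The theta truncation and quadrature parameters are ad hoc choices of mine:

```python
import math

def theta_minus_1(x, N=2000):
    """theta(0; i x^2) - 1 = 2 sum_{n>=1} exp(-pi n^2 x^2), truncated."""
    total = 0.0
    for n in range(1, N + 1):
        e = math.pi * n * n * x * x
        if e > 50:  # remaining terms are below double precision
            break
        total += math.exp(-e)
    return 2 * total

def xi_integral(s, b=8.0, steps=3000):
    """Composite Simpson for int_0^inf [theta(0;ix^2) - 1] x^(s-1) dx, Re(s) > 1."""
    h = b / steps
    def f(x):
        # the integrand extends continuously by 0 at the origin when Re(s) > 2
        return 0.0 if x == 0.0 else theta_minus_1(x) * x**(s - 1)
    total = f(0.0) + f(b)
    for k in range(1, steps):
        total += (4 if k % 2 else 2) * f(k * h)
    return total * h / 3

# zeta(3) by direct sum plus an Euler-Maclaurin tail estimate
zeta3 = sum(k**-3 for k in range(1, 10001)) + 10000**-2 / 2 - 10000**-3 / 2
print(xi_integral(3))          # ~ 0.19131
print(zeta3 / (2 * math.pi))   # pi^(-3/2) Gamma(3/2) zeta(3) ~ 0.19131
```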
Do an analytic continuation as done for the gamma function in MSE-Q13956 to obtain, for $0<\operatorname{Re}(s)<1$,
$$\xi(s)=\int_{0^+}^{\infty }[\vartheta (0;ix^2)-(1+\frac{1}{x})]x^{s-1}dx.$$
Then use symmetries of the Mellin transform and the fact that $\xi(s)=\xi(1-s)$ (as explained in MSE-Q28737) to obtain the functional equation
$$\vartheta (0;ix^2)=\frac{1}{x}\vartheta (0;\frac{i}{x^{2}}).$$
Update (Jan. 5, 2021):
This perhaps answers Edwards' question and provides a simple path to the functional identity.
Euler first acquired a reputation by solving the Basel problem, establishing the value of $\zeta(2)$, and furthermore proved the identities
$$\frac{2}{(2\pi)^{2n}}\:(2n-1)!\:\zeta(2n)=(-1)^{n+1}\frac{B_{2n}}{2n}=(-1)^{n}\zeta(1-2n).$$
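These identities are easy to confirm numerically from the Bernoulli recursion $\sum_{j=0}^{m}\binom{m+1}{j}B_j=0$ ($m\geq 1$, $B_0=1$, giving $B_1=-\frac12$). A sketch in Python; the direct-sum cutoff for $\zeta(2n)$ with an Euler-Maclaurin tail estimate is my own choice:

```python
import math
from fractions import Fraction

def bernoulli(n):
    """B_0..B_n from sum_{j=0}^{m} C(m+1,j) B_j = 0 with B_0 = 1 (so B_1 = -1/2)."""
    B = [Fraction(0)] * (n + 1)
    B[0] = Fraction(1)
    for m in range(1, n + 1):
        B[m] = -sum(Fraction(math.comb(m + 1, j)) * B[j] for j in range(m)) / (m + 1)
    return B

def zeta_even(n, N=10000):
    """zeta(2n) by direct summation plus an Euler-Maclaurin tail estimate."""
    s = 2 * n
    return sum(k**(-s) for k in range(1, N + 1)) + N**(1 - s) / (s - 1) - N**(-s) / 2

B = bernoulli(8)
for n in (1, 2, 3, 4):
    lhs = 2 / (2 * math.pi)**(2 * n) * math.factorial(2 * n - 1) * zeta_even(n)
    rhs = (-1)**(n + 1) * B[2 * n] / (2 * n)
    print(n, lhs, float(rhs))  # e.g. n = 1: both sides equal B_2/2 = 1/12
```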
As noted above, Riemann incorporated the e.g.f. for the Bernoulli numbers in his normalized Mellin/Laplace transform for the zeta function. It's not a great leap of faith to believe that Riemann (of the eponymous surfaces and derivative) grasped the interpolating property of the Mellin transform. Given that assumption, he could have easily noted and perhaps initially formulated the Mellin integral representation from
$$b_m = D^m_{z=0} \; e^{b.z} = D^m_{z=0} \; \frac{z}{e^z-1}$$
$$ = (-1)^{m} \; \frac{1}{2\pi i}\oint_{|z|=1}\frac{m!}{z^{m+1}} \; \frac{z}{e^z-1} \; dz $$
$$= (-1)^{m} \; \frac{1}{2\pi i}\oint_{|z|=1}\frac{m!}{z^{m+1}} \; e^{b.z} \; dz$$
$$ =(-1)^{m} \frac{1}{2\pi i} \; \oint_{|z|=1}\frac{m!}{z^{m+1}} \; [1 - \frac{z}{2}+ \sum_{n \geq 2} \; \cos(\frac{\pi n}{2}) (-2) \; (2\pi)^{-n} \; n! \; \zeta(n) \; \frac{z^n}{n!}] \; dz $$
$$ =(-1)^{m} \frac{1}{2\pi i} \; \oint_{|z|=1}\frac{m!}{z^{m+1}} \; [1 - \frac{z}{2}+ \sum_{n \geq 2} \; (-n) \;\zeta(1-n) \; \frac{z^n}{n!}] \; dz . $$
The Hankel deformation maneuvers above could be reversed to obtain the Mellin transforms equivalent to these contour integrals (see this MO_A), but this would not change the equivalence of the coefficients in the two e.g.f.s for the Bernoulli numbers, so from the interpolating property of the Mellin transform (and therefore the Cauchy integrals), Riemann could have simply analytically continued $n$ to $1-s$ to surmise the target identity
$$ \cos(\frac{\pi n}{2}) \; 2 \; (2\pi)^{-n} \; n! \; \zeta(n) \; |_{n \to 1-s} = n \;\zeta(1-n) \; |_{n \to 1-s},$$
giving the functional reflection identity
$$\cos(\frac{\pi (1-s)}{2}) \; 2 \; (2\pi)^{s-1} \; (1-s)! \; \zeta(1-s) = (1-s) \;\zeta(s) , $$
or
$$2 \; (2\pi)^{s-1} \; \sin(\frac{\pi s}{2}) \; (-s)! \; \zeta(1-s) = \zeta(s) . $$
Edit 1/24/21: A fairly simple fourth way to derive the functional equation involving Fourier series and the core Poisson summation distribution, yet avoiding the theta function, is given in my answer to this recent MO-Q.