What is the expected length of the largest run of heads if we make 1,000 flips?
Some intuition about what you can expect can be found here: Longest Run of Heads.
Let the random variable $L_n$ be the largest contiguous heads sequence in $n$ coin tosses (suppose the coin is biased with heads probability $p$).
In the paper you can find the following intuitive approximation to the expectation of $L_n$: denote by $N_k$ the expected number of heads runs having length $\ge k$. Since each tails outcome is a possible beginning of a heads run (ignoring edge effects), the expected number of heads runs with length $\ge 1$ is $N_1\approx n(1-p)p$. Similarly, for runs of length $\ge 2$ the expectation is $N_2\approx n(1-p)p^2$, and in general $N_k\approx n(1-p)p^k$. Now you can approximate the expectation of $L_n$ by the solution of $N_k=1$, which yields
$\langle L_n\rangle\approx -\log_p\big(n(1-p)\big)=\log_{1/p}\big(n(1-p)\big)$.
Although this argument appears extremely loose, it gives you an idea about the asymptotic behaviour of $\langle L_n\rangle$ (logarithmic growth).
More accurately, you have $\frac{L_n}{\log_\frac{1}{p}n} \rightarrow 1$ in probability, i.e. $\forall \epsilon>0 \lim\limits_{n\to\infty}\mathbb{P} \left(\left|\frac{L_n}{\log_{1/p}n}-1\right|>\epsilon \right)=0$.
You may want to look at the following plot, which I generated a while back. For $1\le n\le 1000$, $\langle L_n\rangle$ is estimated using $1000$ trials of $n$ unbiased coin tosses:
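For concreteness, here is a minimal Monte Carlo sketch of such an estimate (the function names are my own); it also evaluates the heuristic $\log_{1/p}(n(1-p))$ from above for comparison:

```python
import math
import random

def longest_run(flips):
    """Length of the longest contiguous run of heads (True) in a flip sequence."""
    best = cur = 0
    for f in flips:
        cur = cur + 1 if f else 0
        best = max(best, cur)
    return best

def estimate_mean_longest_run(n, trials, p=0.5, seed=0):
    """Monte Carlo estimate of <L_n> for a coin with heads probability p."""
    rng = random.Random(seed)
    total = sum(longest_run(rng.random() < p for _ in range(n))
                for _ in range(trials))
    return total / trials

n, p = 1000, 0.5
print(estimate_mean_longest_run(n, 1000))   # typically around 9.3
print(math.log(n * (1 - p), 1 / p))         # heuristic: log_2(500) ~ 8.97
```

The simulated mean sits noticeably above the heuristic, consistent with the approximation being loose but of the right (logarithmic) order.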
Note: Here I'd like to present an instructive and detailed treatment of this problem provided in Analytic Combinatorics by P. Flajolet and R. Sedgewick.
First the authors derive a generating function very similar to the answer of @CountIblis. But the exciting thing (for me) is that they also provide a rather detailed, in-depth analysis to find the asymptotic behaviour. It's great to see the interplay of many different facts and techniques used to derive a precise asymptotic estimate.
Generating function:
We consider words (or strings) over the binary alphabet $\mathcal{A}=\{0,1\}$. The sequence class $\mathop{SEQ}(0)=\{\varepsilon, 0,00,000,\ldots\}$ is the set of all finite words of $0$'s together with the empty word $\varepsilon$. The corresponding generating function is $\frac{1}{1-z}=1+z+z^2+\ldots$, the exponent in $z^n$ indicating the word with $n$ zeros and the coefficient $1$ giving its number of occurrences.
Every binary word consists of zero or more $0$'s followed by zero or more blocks, each consisting of a $1$ followed by zero or more $0$'s. In symbols and as generating function we derive \begin{align*} \mathcal{W}\cong \mathop{SEQ}(0)\mathop{SEQ}(1\mathop{SEQ}(0))\qquad\Longrightarrow\qquad W(z)=\frac{1}{1-z}\cdot\frac{1}{1-z\frac{1}{1-z}}\tag{1} \end{align*} Observe that $W(z)$ reduces to $W(z)=\frac{1}{1-2z}=1+2z+4z^2+\ldots$, reflecting the fact that there are $2^n$ binary words of length $n$. The representation (1) is convenient for further development.
Considering longest runs of zeros, we denote by ${\mathop{SEQ}}^{<k}(0)$ the set of runs of $0$'s with length less than $k$. The corresponding generating function is $\frac{1-z^{k}}{1-z}$. With respect to (1), all binary words having longest run of zeros less than $k$ are \begin{align*} \mathcal{W}^{\langle k\rangle}\cong {\mathop{SEQ}}^{<k}(0)\mathop{SEQ}(1{\mathop{SEQ}}^{<k}(0))\qquad\Longrightarrow \end{align*} \begin{align*}\qquad W^{\langle k\rangle}(z)&=\frac{1-z^k}{1-z}\cdot\frac{1}{1-z\,\frac{1-z^k}{1-z}}\tag{2}\\ &=\frac{1-z^k}{1-2z+z^{k+1}}\tag{3} \end{align*}
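As a quick sanity check of (2) and (3) (my own sketch, not from the book): the denominator in (3) gives the linear recurrence $w_n=2w_{n-1}-w_{n-k-1}$ for the coefficients, with corrections $+[n=0]-[n=k]$ from the numerator, which can be compared against a brute-force count:

```python
from itertools import product

def coeffs_from_gf(k, N):
    """First N+1 power-series coefficients of (1 - z^k) / (1 - 2z + z^{k+1})."""
    w = []
    for n in range(N + 1):
        val = (1 if n == 0 else 0) - (1 if n == k else 0)  # numerator terms
        if n >= 1:
            val += 2 * w[n - 1]
        if n >= k + 1:
            val -= w[n - k - 1]
        w.append(val)
    return w

def brute_force(k, N):
    """Count binary words of length n whose longest run of 0's is < k."""
    counts = []
    for n in range(N + 1):
        ok = 0
        for word in product("01", repeat=n):
            zero_runs = "".join(word).split("1")
            if max(len(r) for r in zero_runs) < k:
                ok += 1
        counts.append(ok)
    return counts

print(coeffs_from_gf(3, 10))
print(coeffs_from_gf(3, 10) == brute_force(3, 10))   # True
```

For $k=2$ the coefficients $1,2,3,5,8,13,\ldots$ are the Fibonacci numbers, as one expects for words avoiding $00$.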
In the following asymptotic analysis we will use both representations (2) and (3).
The derivation of the generating function was the purely formal part, without any need for analytic methods. It can be found in section I.4.1 of the book. But now we continue at a somewhat deeper level. The treatment of this problem is part of chapter V: Applications of rational and meromorphic asymptotics.
Expectation value of longest run: We claim according to Proposition V.1
The longest run parameter $L$ taken over the set of binary words of length $n$ (endowed with the uniform distribution) satisfies the uniform estimate
\begin{align*} \mathbb{E}_n(L)=\lg(n)+\frac{\gamma}{\log(2)}-\frac{3}{2}+P(\lg n)+O\left(\frac{\log^2n}{\sqrt{n}}\right)\tag{4} \end{align*} where $P$ is a continuous periodic function whose Fourier expansion is given by \begin{align*} P(\omega)=-\frac{1}{\log 2}\sum_{k\in \mathbb{Z}\backslash\{0\}}\Gamma\left(\frac{2ik\pi}{\log 2}\right)e^{-2ik\pi\omega}\tag{5} \end{align*}
The oscillating function $P(\omega)$ is found to have tiny fluctuations, of the order of $10^{-6}$.
The symbol $\lg(n)\equiv\log_2(n)$ is the binary logarithm and $\gamma$ is the Euler-Mascheroni constant.
Note: The expression (4) is a good approximation. Neglecting the periodic term $P(\lg n)$ and the error term, we obtain for $n=1000$ the expectation value \begin{align*} \mathbb{E}_{1000}(L)\approx\lg(1000)+\frac{\gamma}{\log(2)}-\frac{3}{2}\doteq 9.29853\tag{6} \end{align*} and the first Fourier coefficient has amplitude \begin{align*}\frac{\left|\Gamma\left(\frac{2i\pi}{\log 2}\right)\right|}{\log 2}\doteq 7.86\cdot 10^{-7}\end{align*} Since the fluctuations of the Fourier expansion are of order $10^{-6}$, the value stated in (6) is rather accurate.
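The numbers in (6) are easy to reproduce (a sketch of my own; the identity $|\Gamma(iy)|^2=\pi/(y\sinh(\pi y))$ is used since the standard library has no complex gamma function):

```python
import math

# Leading terms of (4) at n = 1000; the periodic term P is of order 1e-6
gamma = 0.5772156649015329              # Euler-Mascheroni constant
mean_approx = math.log2(1000) + gamma / math.log(2) - 1.5
print(round(mean_approx, 5))            # → 9.29853

# Amplitude of the first Fourier coefficient of P in (5)
y = 2 * math.pi / math.log(2)
amp = math.sqrt(math.pi / (y * math.sinh(math.pi * y))) / math.log(2)
print(amp)                              # ≈ 7.86e-07
```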
Note that the value $9.29853$ at $n=1000$ nicely matches the graph presented in the answer from @Ariel, and the first-order approximation $\lg(n)$ is the one provided in the answer by @leonbloy.
How to find the asymptotics:
The authors provide a detailed proof subdivided into four steps:

- locate the dominant pole
- estimate the corresponding contribution
- separate the dominant pole from the other poles in order to derive constructive error terms
- finally approximate the main quantities of interest
Step 1: Locate the dominant pole
The dominant pole is the one nearest to the origin, responsible for the main contribution of the asymptotic approximation. By the first form (2) of the OGF $\mathcal{W}^{\langle k\rangle}$ the dominant pole $\rho_k$ is a root of the equation \begin{align*} 1=s(\rho_k)\qquad\text{where}\qquad s(z)=\frac{z(1-z^k)}{1-z} \end{align*} We consider $k>2$. Since $s(z)=z+z^2+\cdots+z^k$ is an increasing polynomial for $z\geq 0$, we conclude from $s(0)=0$, $s(\frac{1}{2})<1$, $s(1)=k$ that the root $\rho_k$ must lie in the open interval $\left(\frac{1}{2},1\right)$. In fact it's easy to verify that $s(0.6)>1$, hence the first estimate \begin{align*} \frac{1}{2}<\rho_k<\frac{3}{5}\qquad\qquad(k>2)\tag{7} \end{align*} We can now derive precise estimates of $\rho_k$ by bootstrapping, an iteration technique for approaching a fixed point. We write $1=s(z)$ in the form \begin{align*} z=\frac{1}{2}\left(1+z^{k+1}\right) \end{align*} and by using (7) we derive \begin{align*} \frac{1}{2}\left(1+{\left(\frac{1}{2}\right)}^{k+1}\right)<\rho_k <\frac{1}{2}\left(1+{\left(\frac{3}{5}\right)}^{k+1}\right) \end{align*} Thus, $\rho_k$ is exponentially close to $\frac{1}{2}$, and further iteration yields \begin{align*} \rho_k=\frac{1}{2}+\frac{1}{2^{k+2}}+O\left(\frac{k}{2^{2k}}\right)\tag{8} \end{align*}
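The bootstrapping iteration is easy to run numerically (my own sketch): start at $z=\frac{1}{2}$ and iterate $z\mapsto\frac{1}{2}(1+z^{k+1})$; the limit satisfies $2z=1+z^{k+1}$ and is exponentially close to $\frac{1}{2}$, as (8) predicts:

```python
def rho(k, iterations=80):
    """Dominant pole rho_k: fixed point of z -> (1 + z^{k+1}) / 2, started at 1/2."""
    z = 0.5
    for _ in range(iterations):
        z = (1 + z ** (k + 1)) / 2
    return z

for k in (3, 5, 10):
    r = rho(k)
    print(k, r, r - (0.5 + 2 ** (-(k + 2))))   # deviation is O(k / 2^{2k})
```

The iterates increase monotonically from $\frac{1}{2}$ and stay below $\frac{3}{5}$, so convergence to $\rho_k$ is guaranteed by (7).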
Note that $\rho_k$ corresponds to the calculation of the pole $p$ with $n=k+1$ in the answer of @CountIblis.
Step 2: Contribution from the dominant pole
The (expected) main contribution to the asymptotic estimate is the contribution from the dominant pole $\rho_k$. We calculate the residue of $W^{\langle k\rangle}(z)=\frac{g(z)}{h(z)}$ and obtain \begin{align*} R_{n,k}&:=-\mathop{Res}\left[\frac{W^{\langle k\rangle}(z)}{z^{n+1}};z=\rho_k\right]=-\frac{g(\rho_k)}{h^{\prime}(\rho_k)}\rho_k^{-n-1}\\ &=\frac{1-\rho_k^k}{2-(k+1)\rho_k^k}\rho_k^{-n-1} \end{align*}
This quantity is expected to provide the main approximation to the coefficients of $W^{\langle k\rangle}(z)$ as $n\rightarrow \infty$.
Step 3: Separation from the subdominant poles
Next we want to show that $\rho_k$ is the only zero of the denominator of $W^{\langle k\rangle}(z)$ within the disc $|z|\leq \frac{3}{4}$. We apply Rouché's theorem and regard the denominator of $W^{\langle k\rangle}(z)$ \begin{align*} 1-2z+z^{k+1} \end{align*} as the sum of the two functions $f(z)=1-2z$ and $g(z)=z^{k+1}$. The term $f(z)$ has on the circle $|z|=\frac{3}{4}$ a modulus that varies between $\frac{1}{2}$ and $\frac{5}{2}$, while $|g(z)|=\left(\frac{3}{4}\right)^{k+1}\leq \frac{27}{64}$ for any $k\geq 2$.
Thus on the circle $|z|=\frac{3}{4}$, one has $|g(z)|<|f(z)|$, so that $f(z)$ and $f(z)+g(z)$ have the same number of zeros inside the circle. Since $f(z)$ admits $z=\frac{1}{2}$ as only zero there, the denominator must also have a unique root in $|z|\leq \frac{3}{4}$, and that root must coincide with $\rho_k$.
Similar arguments are further applied to derive a representation $q_{n,k}$ of the probability that the longest run in a random word of length $n$ is less than $k$. The following main estimate is obtained for $k>2$
\begin{align*} q_{n,k}:=\mathbb{P}_n(L<k)=\frac{1-\rho_k^k}{1-(k+1)\frac{\rho_k^k}{2}}\left(\frac{1}{2\rho_k}\right)^{n+1} +O\left(\left(\frac{2}{3}\right)^n\right)\tag{9} \end{align*} which holds uniformly with respect to $k$.
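To see how sharp (9) already is for moderate $n$, here is a sketch (my own, not from the book) comparing the main term of (9) with the exact probability, the latter computed from the linear recurrence that the denominator of (3) induces on the coefficients of $W^{\langle k\rangle}(z)$:

```python
def rho(k, iterations=80):
    """Dominant pole: fixed point of z -> (1 + z^{k+1}) / 2."""
    z = 0.5
    for _ in range(iterations):
        z = (1 + z ** (k + 1)) / 2
    return z

def q_exact(n, k):
    """P_n(L < k): w_m = 2 w_{m-1} - w_{m-k-1} (+ numerator corrections), then / 2^n."""
    w = []
    for m in range(n + 1):
        val = (1 if m == 0 else 0) - (1 if m == k else 0)
        if m >= 1:
            val += 2 * w[m - 1]
        if m >= k + 1:
            val -= w[m - k - 1]
        w.append(val)
    return w[n] / 2 ** n

def q_main(n, k):
    """Main term of (9)."""
    r = rho(k)
    return (1 - r ** k) / (1 - (k + 1) * r ** k / 2) * (1 / (2 * r)) ** (n + 1)

for k in (4, 6, 8):
    print(k, q_exact(50, k), q_main(50, k))   # agree up to O((2/3)^n)
```

Already at $n=50$ the two values agree to many decimal places, since the subdominant contribution is $O((2/3)^n)$.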
Step 4: Final approximations
Note: I present this last step without going into details (which are partly beyond my skills). Remarkable (at least for me) is the art and mastery in finding an appropriate partitioning of the domain in order to identify tail regions and central regions, the latter being relevant for the main approximation. I would appreciate it if someone could provide fruitful hints regarding this theme.
The first part of this step is to establish tail inequalities covering the regions which do not contribute to the main approximation.
\begin{align*} \mathbb{P}_n\left(L<\frac{3}{4}\lg n\right)=O\left(e^{-\frac{1}{2}\sqrt[4]{n}}\right)\qquad\qquad \mathbb{P}_n\left(L\geq 2\lg n+y\right)=O\left(\frac{e^{-2y}}{n}\right)\tag{10} \end{align*}
They are derived by applying the representation (8) of $\rho_k$ in the main estimate (9). Thus, for asymptotic purposes, only a relatively small region around $\lg n$ needs to be considered.
Regarding the central regime, for $k=\lg n+x$ and $x\in[-\frac{1}{4}\lg n,\lg n]$, again with the approximation (8) of $\rho_{k}$ and related quantities, one finds \begin{align*} (2\rho_k)^{-n}=\exp\left(-\frac{n}{2^{k+1}}+O(kn2^{-2k})\right) =e^{-\frac{n}{2^{k+1}}}\left(1+O\left(\frac{\log n}{\sqrt{n}}\right)\right) \end{align*} and it follows (uniformly in $k$) \begin{align*} q_{n,k}=e^{-\frac{n}{2^{k+1}}}\left(1+O\left(\frac{\log n}{\sqrt{n}}\right)\right)\tag{11} \end{align*}
The following mean estimate is derived from the fact that the distribution quickly decays at values away from $\lg n$ by (10) while it satisfies equation (11) in the central region. \begin{align*} \mathbb{E}_n(L):=\sum_{h\geq 1}[1-\mathbb{P}_n(L<h)]=\Phi\left(\frac{n}{2}\right)-1+O\left(\frac{\log ^2 n}{n}\right) \end{align*} with \begin{align*} \Phi(x):=\sum_{h\geq 0}\left[1-e^{-\frac{x}{2^h}}\right] \end{align*} We consider the three cases $h<h_0,h\in[h_0,h_1]$, and $h>h_1$, with $h_0=\lg x-\lg \lg x$ and $h_1=\lg x+\lg \lg x$, where the general term is (respectively) close to $1$, between $0$ and $1$, and close to $0$. By summing, one finds elementarily \begin{align*} \Phi(x)=\lg x+O(\lg \lg x) \qquad \qquad x\rightarrow\infty \end{align*}
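Both (11) and the mean estimate can be checked numerically against the exact distribution (my own sketch; $q_{n,k}$ is computed exactly from the recurrence behind (3), and $\Phi$ is evaluated as a truncated sum):

```python
import math

def q_exact(n, k):
    """P_n(L < k), exactly, via the recurrence induced by the denominator of (3)."""
    if k == 0:
        return 0.0
    w = []
    for m in range(n + 1):
        val = (1 if m == 0 else 0) - (1 if m == k else 0)
        if m >= 1:
            val += 2 * w[m - 1]
        if m >= k + 1:
            val -= w[m - k - 1]
        w.append(val)
    return w[n] / 2 ** n

n = 1000

# Central regime (11): q_{n,k} is close to exp(-n / 2^{k+1})
for k in (9, 10, 11):
    print(k, q_exact(n, k), math.exp(-n / 2 ** (k + 1)))

# Mean: exact tail sum vs. Phi(n/2) - 1 (terms beyond h = 100 are negligible)
exact_mean = sum(1 - q_exact(n, h) for h in range(1, 101))
phi = sum(1 - math.exp(-(n / 2) / 2 ** h) for h in range(120))
print(exact_mean, phi - 1)     # both are close to 9.2985
```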
According to the authors, the method of choice for precise asymptotics is to treat $\Phi(x)$ as a harmonic sum and apply Mellin transform techniques. The Mellin transform of $\Phi(x)$ is \begin{align*} \Phi^{\star}(s):=\int_0^{\infty}\Phi(x)x^{s-1}dx=\frac{\Gamma(s)}{1-2^s}\qquad\qquad \Re(s)\in (-1,0) \end{align*}
They finally conclude:
The double pole of $\Phi^{\star}$ at $0$ and the simple poles at $s=\frac{2ik\pi}{\log 2}$ are reflected by an asymptotic expansion that involves a Fourier series \begin{align*} \Phi(x)=\lg x+\frac{\gamma}{\log 2}+\frac{1}{2}+P(\lg x)+O(x^{-1}) \end{align*} with $P(\omega)$ as stated in (5). \begin{align*} \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\Box \end{align*}
If you can calculate the probability of getting no more than $k$ heads in a row, call it $P_k$, then the probability that the maximum number of heads in a row is exactly $k$ is $P_k - P_{k-1}$. Then your expectation is
$$\sum_{k=1}^{1000} k(P_k - P_{k-1}) = 1000P_{1000} - \sum_{k=0}^{999} P_k$$
So it suffices to calculate all the $P_k$. Fortunately there is an easy way to do this with a recurrence on the total number of flips $n$. Let $Q_{k,m}(n)$ be the number of ways to have $n$ flips so that there are no more than $k$ heads in a row, and the flips end with a run of exactly $m$ heads. Then
$$Q_{k,0}(n+1) = \sum_{m=0}^k Q_{k,m}(n)$$
and
$$Q_{k,m}(n+1) = Q_{k,m-1}(n)$$ for all $m$ between $1$ and $k$, and your desired $P_k$ (as a function of $n = 1000$) is $P_k = (1/2^{1000}) \sum_{m=0}^k Q_{k,m}(1000)$.
You can calculate the $Q$ values in order of increasing $n$ up to $1000$ fairly quickly, and then $P_k$ is as described. A short computer program computes all the $P_k$, and hence your expectation, in a fraction of a second.
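A sketch of that computation (my own; a deque keeps the row $Q_{k,0},\ldots,Q_{k,k}$ for the current $n$, and a running total avoids re-summing it):

```python
from collections import deque

def prob_at_most(k, n):
    """P_k = P(no more than k heads in a row among n fair flips), via the
    Q_{k,m} recurrence: Q[m] counts words ending in a run of exactly m heads."""
    Q = deque([1] + [0] * k)        # n = 0: one empty word, ending in 0 heads
    total = 1                       # running value of sum(Q)
    for _ in range(n):
        dropped = Q.pop()           # extending a k-run by another head is forbidden
        Q.appendleft(total)         # a tails flip resets the run to 0
        total = 2 * total - dropped
    return total / 2 ** n

n = 1000
P = [prob_at_most(k, n) for k in range(n + 1)]
expectation = n * P[n] - sum(P[:n])
print(expectation)                  # ≈ 9.2985, matching the value above
```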