Finding all roots of a polynomial
This argument is problematic; see Andrej Bauer's comment below.
Sure. I have no idea what an efficient algorithm looks like, but since you only asked whether it's possible I'll offer a terrible one.
Lemma: Let $f(z) = z^n + a_{n-1} z^{n-1} + \cdots + a_0$ be a complex polynomial and let $R = \max(1, |a_{n-1}| + \cdots + |a_0|)$. Then all the roots of $f$ lie in the closed disk of radius $R$ centered at the origin.
Proof. If $|z| > R$, then $|z|^n > R |z|^{n-1} \ge |a_{n-1} z^{n-1}| + \cdots + |a_0|$ (the second inequality because $R\ge|a_{n-1}|+\cdots+|a_0|$ and $|z|^{n-1}\ge|z|^k$ for $k\le n-1$, since $|z|>R\ge1$), so by the triangle inequality no such $z$ is a root.
Now subdivide the disk of radius $R$ into, say, a mesh of squares of side length $\varepsilon > 0$ and evaluate the polynomial at all the lattice points of the mesh. As the mesh size tends to zero you'll find points that approximate the zeroes to arbitrary accuracy.
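Here's a minimal sketch of that brute-force search in Python (the helper mesh_candidates and the threshold tol are my own illustration; as discussed further below, $|p(z)|$ small does not by itself certify a nearby zero, so the output is only a list of candidates):

# Rough sketch of the mesh search.  tol is a hypothetical acceptance
# threshold; see the discussion of Lemma 3 below for what "small" must mean.
def mesh_candidates(coeffs, eps, tol):
    """coeffs = [a_0, ..., a_{n-1}] of the monic p(z) = z^n + ... + a_0."""
    n = len(coeffs)
    R = max(1.0, sum(abs(c) for c in coeffs))   # the lemma: all roots in |z| <= R

    def p(z):
        return z**n + sum(c * z**k for k, c in enumerate(coeffs))

    candidates = []
    steps = int(2*R/eps) + 1                    # mesh points covering [-R, R]
    for i in range(steps + 1):
        for j in range(steps + 1):
            z = complex(-R + i*eps, -R + j*eps)
            if abs(p(z)) < tol:
                candidates.append(z)
    return candidates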
There are also lots of specialized algorithms for finding roots of polynomials at the Wikipedia article.
Edit - Big News: Below I give an algorithm and show that it works, in a specified sense. I acknowledge that it does not work perfectly, in a certain sense, but we have a fix for that: a bit of pre-calculation and it works perfectly!
The algorithm below produces $a_1,\dots,a_k$ such that every $a_j$ is within $\epsilon$ of a zero of $p$ and every zero of $p$ is within $\epsilon$ of some $a_j$. It's not perfect, the problem being that several $a_j$ can cluster around the same zero; in a sense we've located all the zeroes to within $\epsilon$, but we don't know which $a_j$ approximate the same zero. If we knew that $|z_i-z_j|\ge 5\epsilon$ for every two zeroes $z_i\ne z_j$ we could fix that. (Because then $D(a_j,\epsilon)$ contains exactly one zero of $p$, and if $|a_i-a_j|<2\epsilon$ then $a_i$ and $a_j$ approximate the same zero. So $(a_1,\dots,a_k)$ gives us $(z_1,\dots,z_n)$: we know there is a zero $z_1\in D(a_1,\epsilon)$; call $a_1$ the approximation to $z_1$, throw away the $a_i$ with $|a_i-a_1|<2\epsilon$, and continue...)
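In code, that throw-away step might look like this (a minimal sketch, my own illustration, assuming the $5\epsilon$ separation):

# Sketch of the fix, assuming every two distinct zeroes of p are at least
# 5*eps apart and approx = [a_1, ..., a_k] is the output of the search.
def dedup(approx, eps):
    roots = []
    remaining = list(approx)
    while remaining:
        a = remaining.pop()     # a is within eps of exactly one zero
        roots.append(a)
        # everything within 2*eps of a approximates that same zero
        remaining = [b for b in remaining if abs(b - a) >= 2*eps]
    return roots                # one representative per zero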
So we can fix that:
Lemma 1. Given $p$ with no multiple roots we can calculate $\epsilon>0$ such that any two roots of $p$ are separated by at least $\epsilon$.
First we need:
Lemma 2. Given $p$ with no multiple roots we can calculate $\delta>0$ such that $p(z)=0$ implies $|p'(z)|\ge\delta$.
Proof of Lemma 2: The Euclidean algorithm gives us $r_1,r_2\in\Bbb Q[z]$ such that $$1=r_1(z)p(z)+r_2(z)p'(z).$$Calculate $R$ so that $p(z)=0$ implies $|z|\le R$, then calculate $\delta$ so that $|z|\le R$ implies $|r_2(z)|\le1/\delta$. Now if $p(z)=0$ then $1=r_2(z)p'(z)$, so $|p'(z)|=1/|r_2(z)|\ge\delta$.
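Here's a sketch of that computation with sympy (my choice of tool, not part of the proof; gcdex is sympy's extended Euclidean algorithm):

# Sketch: Bezout coefficients r1, r2 with r1*p + r2*p' = 1, for a sample
# p with rational coefficients and simple roots.
from sympy import symbols, diff, gcdex

z = symbols('z')
p = z**3 - 2*z + 2                       # example polynomial with simple roots
r1, r2, g = gcdex(p, diff(p, z), z)      # r1*p + r2*p' = g
assert g == 1                            # gcd(p, p') = 1 since the roots are simple
# At a zero of p this gives 1 = r2(z) p'(z), so an upper bound 1/delta for
# |r2| on |z| <= R is a lower bound delta for |p'| at the zeroes.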
Proof of Lemma 1: Say all the zeroes of $p$ are contained in $|z|\le R$, and say $|p''(z)|\le c$ in $|z|\le R$. Let $\delta$ be as in Lemma 2, and assume $p(a)=0$. Then Taylor's theorem shows that $$|p(z)|\ge\delta|z-a|-\frac12 c|z-a|^2\quad(|z|\le R),$$so if $0<|z-a|<2\delta/c$ then $|p(z)|>0$. Hence any two zeroes are separated by at least $2\delta/c$, so we may take $\epsilon=2\delta/c$.
So you begin the algorithm by first replacing $p$ by $p/\gcd(p,p')$ to ensure there are no multiple zeroes and then decreasing $\epsilon$ if necessary to ensure all the zeroes are separated by $5\epsilon$, as required above.
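The square-free reduction is a one-liner with sympy (again my choice of tool; quo is polynomial quotient):

# Sketch: strip multiple roots by dividing p by gcd(p, p').
from sympy import symbols, diff, gcd, quo

z = symbols('z')
p = (z - 1)**2 * (z + 2)                  # example with a double root at 1
p_simple = quo(p, gcd(p, diff(p, z)), z)  # same roots, now all simple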
Right now this post sort of starts in the middle - I wanted to put the big news at the top (Lemma 1 is something that's been bugging me). It starts here:
Start: In fact the algorithm in the accepted answer does work, at least with a little elaboration regarding how we decide to reject or accept a given square during the subdivision.
Edit: For some reason I thought the accepted answer used a binary search, as below. Looking again I see it doesn't, it starts with a grid of squares of size $\epsilon$. So never mind what I said about the accepted answer. It seems likely that the binary search below is much more efficient, since a square of size much larger than $\epsilon$ will often be rejected in one step. Also there's a proof below that the binary search works.
Regarding efficiency: I made a little Python implementation (see implementation below) and searched for the roots of $z^2+1$ with various $\epsilon$. The results appeared "instantaneously". Didn't bother figuring out how to actually time it, but I did count the number of iterations, i.e. the number of squares "tested". I got this:
eps = 0.1: count = 516
eps = 0.01: count = 868
eps = 0.001: count = 1156
eps = 0.0001: count = 1428
eps = 1e-05: count = 1812
eps = 1e-06: count = 2100
eps = 1e-07: count = 2388
eps = 1e-08: count = 2772
eps = 1e-09: count = 3060
eps = 1e-10: count = 3348
Note those counts are obsolete: with the improved version of reject($Q$) below we get count = 1108 for $\epsilon = 10^{-10}$.
The dependence of the running time on $\epsilon$ appears to be much better than I expected - it certainly totally annihilates the $c/\epsilon^2$ in the accepted answer.
In hindsight this makes sense. Say we run the algorithm with $\epsilon=\epsilon_0$ and then again with $\epsilon=\epsilon_0/10$. The definition of reject($Q$) below is independent of $\epsilon$, so every square rejected the first time will also be rejected the second time; the only extra work on the second run is in subdividing the squares that were accepted the first time, and the number of accepted squares is very small. (With $z^2+1$ there were eight accepted squares for each $\epsilon$, which is what I expected, or at least hoped.) Hmm, by that argument it should be something like $\log(1/\epsilon)$, which seems roughly consistent with the experiment above.
It "works" in this sense: Given $\epsilon>0$ we can find finitely many points $a_1,\dots a_k$ such that every $a_j$ is within $\epsilon$ of a zero and every zero is within $\epsilon$ of some $a_j$.
Which is to say we can approximate the zero set in the Hausdorff metric. That seems like it should be good enough. It doesn't work perfectly, in that two zeroes could be close to the same $a_j$. It's clear that multiple roots are going to cause problems with this sort of search; it seems clear that two roots close together will also make things harder. We can eliminate multiple roots, but there's not much we can do about two roots close together...
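To make that concrete, here's a tiny checker for the two-sided guarantee (my own illustration, useful only when the zeroes are known, e.g. in tests):

# Sketch: check that approx and zeroes are within eps of each other in the
# Hausdorff sense (both arguments are non-empty lists of complex numbers).
def hausdorff_ok(approx, zeroes, eps):
    a_near = all(min(abs(a - z) for z in zeroes) <= eps for a in approx)
    z_near = all(min(abs(a - z) for a in approx) <= eps for z in zeroes)
    return a_near and z_near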
It was said that the accepted algorithm doesn't work because $|p(z)|$ small doesn't imply that $z$ is close to a zero of $p$. But it does imply that under some conditions, for a suitable value of "small":
Lemma 3. Suppose $p$ is entire, $p(a)\ne0$ and $p'(a)\ne 0$. Let $r=2|p(a)|/|p'(a)|$, and choose $\delta_2>0$ such that $|p''(z)|\le \delta_2$ for $|z-a|\le r$. If $2|p(a)|\delta_2<|p'(a)|^2$ then $p$ has exactly one zero in $|z-a|<r$.
Proof: Let $g(z)=p(a)+p'(a)(z-a)$ and note that $g$ has exactly one zero in $|z-a|<r$. If $|z-a|=r$ then $|g(z)|\ge r|p'(a)|-|p(a)|=|p(a)|$, while Taylor's theorem shows that $|p(z)-g(z)|\le\frac12 r^2\delta_2$. Since $\frac12 r^2\delta_2=2|p(a)|^2\delta_2/|p'(a)|^2$, the hypothesis implies that $\frac12 r^2\delta_2<|p(a)|$, so Rouché's theorem says that $p$ has the same number of zeroes as $g$ in $|z-a|<r$.
Now for the algorithm: Let $\epsilon>0$. Replacing $p$ by $p/\gcd(p,p')$, we may assume that every zero of $p$ is simple.
Given the coefficients of $p$ we can certainly find a (closed) square $Q_0$ that contains all the zeroes of $p$, and then we can find $\delta_1$ and $\delta_2$ such that $|p'(z)|\le\delta_1$ and $|p''(z)|\le\delta_2$ whenever $d(z,Q_0)\le\epsilon$.
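For what it's worth, here's one crude way to get such bounds from the coefficients (the helper bounds and its conventions are my own illustration, not part of the algorithm proper): since all zeroes lie in $|z|\le R$, we can take $Q_0=[-R,R]\times[-R,R]$ and bound $|p'|$, $|p''|$ by the triangle inequality on $|z|\le\sqrt2 R+\epsilon$.

# Sketch: Q_0 and crude bounds d1 >= |p'| and d2 >= |p''| near Q_0, for the
# monic p(z) = z^n + a_{n-1} z^{n-1} + ... + a_0, coeffs = [a_0, ..., a_{n-1}].
from math import sqrt

def bounds(coeffs, eps):
    R = max(1.0, sum(abs(c) for c in coeffs))  # all zeroes in |z| <= R
    M = sqrt(2)*R + eps                        # |z| <= M whenever d(z, Q_0) <= eps
    cs = [abs(c) for c in coeffs] + [1.0]      # cs[k] bounds the coeff of z^k
    d1 = sum(k*c*M**(k-1) for k, c in enumerate(cs) if k >= 1)
    d2 = sum(k*(k-1)*c*M**(k-2) for k, c in enumerate(cs) if k >= 2)
    return R, d1, d2                           # use Q_0 = square(0, R) below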
We construct a tree of squares, dividing $Q_0$ into four equal subsquares and then dividing each of them into four subsquares, etc. For each square $Q$ obtained this way we perform two tests:
def reject($Q$):
Say $a$ is the center of $Q$ and let $r$ be the distance from $a$ to a corner of $Q$. If $|p(a)|>r\delta_1$ we declare $Q$ to be a terminal node and perform no further subdivision; we will say below that $Q$ was "rejected". Note that if $Q$ is rejected it's clear that $Q$ cannot contain a zero of $p$.
Edit: That's stupid, using a global upper bound on $|p'|$ when there's a better bound on $Q$ itself. In fact $\min(\delta_1,|p'(a)|+r\delta_2)$ is an upper bound for $|p'|$ in $Q$. So change the above: reject $Q$ if $|p(a)|>r\min(\delta_1,|p'(a)|+r\delta_2)$. (This made a huge improvement in a search for the zeroes of $\sin(z)$ in $|z|<4$, since $|\cos|$ is usually much smaller than its global maximum.)
In fact, we get down to small squares very quickly, and once $Q$ is small it will rarely if ever happen that $\min(\delta_1,|p'(a)|+r\delta_2)=\delta_1$. So why not just forget $\delta_1$ and simplify things by saying we reject $Q$ if $|p(a)|>r(|p'(a)|+r\delta_2)$? That might work, but it messes up the proof below that the algorithm terminates.
def accept($Q$):
Say $a$ is the center of $Q$, and let $r=2|p(a)|/|p'(a)|\in[0,\infty]$. If $diam(Q)<\epsilon$, $2|p(a)|\delta_2<|p'(a)|^2$ and $r<\epsilon$ we declare $Q$ to be a terminal node and perform no further subdivision; we will say below that $Q$ was "accepted". Note that if $Q$ is accepted then $p$ has exactly one zero in $|z-a|<r$, by Lemma 3. (Unless $p(a)=0$, in which case $r=0$; in any case $p$ has a zero in $|z-a|\le r$.)
Note it can and probably will happen that reject($Q$)=accept($Q$)=True. I didn't realize until I ran some code that this means it's important to test reject($Q$) first. Doing it the other way around increases the length of the list $a_1,\dots,a_k$ returned, with more $a_j$ clustering around each zero.
If the algorithm terminates we've partitioned $Q_0$ into finitely many squares, each of which was accepted or rejected, and we're done.
Why we're done, assuming the algorithm terminates: Say $a_1,\dots,a_k$ are the centers of the accepted squares. For every $j$, $p$ has exactly one root in $|z-a_j|\le r_j$; since $r_j<\epsilon$ this says every $a_j$ is within $\epsilon$ of a root. Conversely, since the rejected squares do not contain any roots, every root lies in some accepted square $Q$; since $diam(Q)<\epsilon$ this shows that every root is within $\epsilon$ of some $a_j$.
The algorithm terminates.
Proof: By Kőnig's lemma it's enough to show that our tree has no infinite branches. Say $Q_1,Q_2,\dots$ is a sequence of squares such that $Q_{k+1}$ is one of the four quarters of $Q_k$; we need to show that some $Q_k$ is accepted or rejected.
Say $a_k$ is the center of $Q_k$; since the squares are nested and shrinking, $a_k$ converges to some point $z$.
Suppose $p(z)\ne0$, and let $r_k$ be the distance from $a_k$ to a corner of $Q_k$. Then there certainly exists $k$ such that $|p(a_k)|>r_k\delta_1$, since $r_k\to0$ but $p(a_k)\not\to0$. So $Q_k$ is rejected.
Suppose $p(z)=0$. Then $p'(z)\ne0$, since the zeroes are simple. So $p(a_k)\to0$ and $p'(a_k)\to p'(z)\ne0$; hence $diam(Q_k)\to0$, $2|p(a_k)|/|p'(a_k)|\to0$, and $2|p(a_k)|\delta_2\to0<|p'(z)|^2$, so the three conditions in the definition of "accepted" all hold for large $k$, and some $Q_k$ is accepted.
Implementation: I may as well include the code I mentioned above, in case anyone wants to try it. Note that this is not exactly what's described above: search(Q_0,f,df,eps,d1,d2) works for any holomorphic function $f$, returning all the zeroes in $Q_0$. In particular it doesn't try to calculate a $Q_0$ that contains all the zeroes, since $f$ need not be a polynomial; also it doesn't calculate $\delta_1$ and $\delta_2$, you have to work out appropriate upper bounds d1 and d2 for $|f'|$ and $|f''|$ yourself. Also it's simply not right, in that it uses floating-point calculations to test whether $p(a)=0$. Not intended as a production version, just a quick and filthy test of whether the thing actually works. It does, much better than I expected... (If you get a SyntaxError or IndentationError I screwed up adding four spaces to the start of each line.)
Anyway:
from math import sqrt

class square():
    def __init__(Q, a, r):
        """If a = x+iy and r > 0 then square(a,r) represents the
        square [x-r,x+r] x [y-r,y+r]."""
        Q.a = a
        Q.r = float(r)

    def quarters(Q):
        # the four quarter-squares, each of half the size
        vs = [0.5+0.5j, 0.5-0.5j, -0.5+0.5j, -0.5-0.5j]
        return [square(Q.a + v*Q.r, Q.r/2) for v in vs]

def search(Q_0, f, df, eps, d1, d2):
    """f should be analytic in the eps-nbd of Q_0. NOTE the proof that the
    algorithm terminates requires that all zeroes of f are simple. df = f';
    d1 and d2 are upper bounds for |f'| and |f''| in the eps-nbd of Q_0.
    Returns a list approximating all zeroes in Q_0, in the sense that
    everything in the returned list is within eps of a zero and every
    zero is within eps of something in the list."""
    s = sqrt(2)   # (distance from center to corner) / Q.r

    def reject(Q):
        # return abs(f(Q.a)) > s*d1*Q.r
        # No! There's a better upper bound for the derivative in Q
        # if the derivative at the center happens to be small:
        return abs(f(Q.a)) > s*min(d1, abs(df(Q.a)) + d2*s*Q.r)*Q.r

    def accept(Q):
        z = abs(f(Q.a))
        dz = abs(df(Q.a))
        # diam(Q) < eps, then r = 2z/dz < eps, then the Lemma 3 hypothesis
        return (2*s*Q.r < eps) and (2*z < eps*dz) and (2*z*d2 < dz*dz)

    todo = [Q_0]
    accepted = []
    while len(todo) > 0:
        newtodo = []
        for Q in todo:
            for q in Q.quarters():
                if not reject(q):
                    if accept(q):
                        accepted.append(q)
                    else:
                        newtodo.append(q)
        todo = newtodo
    return [Q.a for Q in accepted]
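For example, a call might look like this (my own illustration; the bounds d1 = 6 and d2 = 2 are crude hand-computed bounds for $|2z|$ and $|2|$ within $\epsilon$ of the square $[-2,2]\times[-2,2]$):

# Hypothetical usage: the zeroes of z^2 + 1 in [-2,2] x [-2,2].
# |f'(w)| = |2w| <= 2*(2*sqrt(2) + eps) < 6 near Q_0, and |f''| = 2.
Q_0 = square(0, 2)
roots = search(Q_0, lambda w: w*w + 1, lambda w: 2*w, 1e-6, 6.0, 2.0)
print(roots)    # points clustered within 1e-6 of i and -i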
The Wikipedia article http://en.wikipedia.org/wiki/Root-finding_algorithm links to many different methods for finding roots of polynomials. (Start at the section entitled "Finding roots of polynomials".) Many of the methods are incomparable, in the sense that they work faster or slower than others depending on the specific polynomial.