Figure out the roots from the Dynkin diagram
Here's an answer in the simply-laced case. Its proof, and generalization to non-simply-laced, are left to the reader.
1) Start with a simple root, and think of it as a labeling of the Dynkin diagram with a 1 there and 0s elsewhere.
2) Look for a vertex whose label is < 1/2 the sum of the surrounding labels. Increment that label. You've found a root!
3) Go back to (2), unless there is no such vertex anymore. You've found the highest root!
Take the union over all such games, and you get all the positive roots. Include their negatives, and you have all roots.
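To make this concrete, here is a minimal Python sketch of the game (the choice of $D_4$ as the example diagram, and all names in the code, are mine, not part of the recipe above). It closes the set of simple-root labelings under the incrementing move, which in the simply-laced case produces exactly the positive roots.

```python
def positive_roots(adj):
    """Closure of the simple-root labelings under the move:
    if label[i] < 1/2 * (sum of the surrounding labels), increment label[i]."""
    n = len(adj)
    simple = [tuple(1 if j == i else 0 for j in range(n)) for i in range(n)]
    roots = set(simple)
    frontier = list(simple)
    while frontier:
        lab = frontier.pop()
        for i in range(n):
            if 2 * lab[i] < sum(lab[j] for j in adj[i]):   # the step-(2) condition
                new = lab[:i] + (lab[i] + 1,) + lab[i+1:]
                if new not in roots:
                    roots.add(new)
                    frontier.append(new)
    return roots

adj_D4 = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}   # D4, with vertex 0 the branch point
roots = positive_roots(adj_D4)
print(len(roots))             # 12 positive roots, as expected for D4
print(max(roots, key=sum))    # (2, 1, 1, 1): the highest root
```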
If you start with a non-Dynkin diagram, the game doesn't terminate. This is part of a way to classify the Dynkin diagrams.
[EDIT: yes, it can terminate. There's a variant, where you replace the vertex label by the sum of the surrounding labels, minus the original label. That game terminates only for Dynkin diagrams.]
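For what it's worth, here is a rough sketch of that variant as I read it: fire a vertex only when the replacement strictly increases its label, and stop when no vertex qualifies. The step cap is purely for the demo, so that a game which (presumably) never terminates just shows up as hitting the cap.

```python
def variant_terminates(adj, start=0, max_steps=1000):
    """Variant move: replace label[i] by (sum of surrounding labels) - label[i],
    applied only where this strictly increases the label."""
    n = len(adj)
    lab = [1 if i == start else 0 for i in range(n)]
    for _ in range(max_steps):
        for i in range(n):
            s = sum(lab[j] for j in adj[i])
            if lab[i] < s - lab[i]:
                lab[i] = s - lab[i]
                break
        else:
            return True        # no vertex qualifies: the game has terminated
    return False               # still running after max_steps moves

print(variant_terminates({0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}))        # D4: True
print(variant_terminates({0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}))  # 4-cycle: False
```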
BTW at the highest root, most of the vertices have labels equal to 1/2 the surrounding sum. If you put in a new vertex, connected to those vertices whose label is > 1/2 the surrounding sum, you get the affine Dynkin diagram.
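Continuing the sketch above, the attachment points of the new vertex can be read off from the highest-root labeling found there: they are the vertices where the inequality is strict.

```python
theta = max(roots, key=sum)    # highest root of D4 found above
attach = [i for i in range(len(theta))
          if 2 * theta[i] > sum(theta[j] for j in adj_D4[i])]
print(attach)   # [0]: the affine vertex of affine D4 attaches to the branch point
```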
This question is probably answered in many textbooks on Lie algebras, Chevalley groups, representation theory, etc. I always think that Bourbaki's treatment of Lie groups and algebras is a great place to look (but I don't have it with me at the moment). I also tend to look things up in Humphreys, and in Knapp's "Lie Groups Beyond an Introduction".
Here's a method that's very good for pen-and-paper computations, and suffices for many examples. I believe that it will algorithmically answer your question in general as well.
The key is the "rank two case", and the key result is the following fact about root strings. I'll assume that we are working with the root system associated to a semisimple complex Lie algebra. Let $\Phi$ be the set of roots (zero is not a root, for me and most authors).
Definition: If $\alpha, \beta$ are roots, the $\beta$-string through $\alpha$ is the set of elements of $\Phi \cup \{ 0 \}$ of the form $\alpha + n \beta$ for some integer $n$.
Theorem: There exist integers $p \leq 0$ and $q \geq 0$ such that the $\beta$-string through $\alpha$ has the form: $$\{ \alpha + n \beta : p \leq n \leq q \}.$$ Furthermore, the endpoints of the string are constrained by the inner products: $$p+q = \frac{ -2 \langle \alpha, \beta \rangle }{\langle \beta, \beta \rangle}.$$
Example: Let $\alpha$ be the short root and $\beta$ be the long root, corresponding to the two vertices of the Dynkin diagram of type $G_2$. The $\alpha$-string through $\beta$ consists of: $$\beta, \beta+\alpha, \beta + 2 \alpha, \beta + 3 \alpha,$$ as remarked in the question.
Here $p = 0$ and $q = 3$. Indeed, we find that $$3 = 0+3 = \frac{ -2 \langle \beta, \alpha \rangle }{\langle \alpha,\alpha \rangle} = \frac{(-2)(-3)}{2},$$ using the Cartan matrix (which is easily encoded in the Dynkin diagram).
Perhaps you already knew this, since you found those roots "clear". But from here, you can continue again with $\alpha$ and $\beta$ root strings through these roots.
For example, let's consider the $\beta$-string through $\beta + 3 \alpha$. These are the roots of the form $\beta + 3 \alpha + n \beta$, for integers $p \leq n \leq q$. We know that $p = 0$, since for $n = -1$ we would get $3 \alpha$, which is not a root. Since $p = 0$, the formula gives $q$ directly: $$q = \frac{ -2 \langle \beta+3\alpha, \beta \rangle}{\langle \beta, \beta \rangle} = \frac{ -2(2 - 3)}{2} = \frac{2}{2} = 1,$$ using the Cartan matrix again (normalizing $\langle \beta, \beta \rangle = 2$, so that $\langle \alpha, \beta \rangle = -1$). From this we find that there is another root $2 \beta + 3 \alpha$ (corresponding to $n = q = 1$), and furthermore that $m \beta + 3 \alpha$ is not a root for $m \geq 2$... excuse me, for $m \geq 3$.
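For what it's worth, the root-string theorem can be turned directly into an algorithm that builds all positive roots from the Cartan matrix alone. Here is a rough Python sketch (the conventions, and $G_2$ as the test case, are my own choices): grow the roots height by height, and for each root $\beta$ and simple root $\alpha_i$ use the fact that the upward extent of the $\alpha_i$-string through $\beta$ equals the downward extent minus $2\langle \beta, \alpha_i \rangle / \langle \alpha_i, \alpha_i \rangle$.

```python
def positive_roots_from_cartan(A):
    """A[i][j] = 2<alpha_j, alpha_i> / <alpha_i, alpha_i>; roots are coefficient tuples."""
    n = len(A)
    simple = [tuple(1 if j == i else 0 for j in range(n)) for i in range(n)]
    roots, layer = set(simple), list(simple)
    while layer:                      # each layer = all roots of a given height
        next_layer = []
        for beta in layer:
            for i in range(n):
                # downward extent of the alpha_i-string through beta
                d, down = 0, list(beta)
                while True:
                    down[i] -= 1
                    if tuple(down) in roots:
                        d += 1
                    else:
                        break
                pairing = sum(beta[j] * A[i][j] for j in range(n))  # 2<beta,alpha_i>/<alpha_i,alpha_i>
                if d - pairing >= 1:          # the string extends at least one step up
                    up = beta[:i] + (beta[i] + 1,) + beta[i+1:]
                    if up not in roots:
                        roots.add(up)
                        next_layer.append(up)
        layer = next_layer
    return roots

G2 = [[2, -3], [-1, 2]]    # Cartan matrix, simple roots ordered (short, long)
print(sorted(positive_roots_from_cartan(G2)))
# [(0, 1), (1, 0), (1, 1), (2, 1), (3, 1), (3, 2)] -- the six positive roots of G2
```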
By using root strings, together with bounds on how long roots can be, one can find all of the roots without taking too much time. It should also be mentioned that, for an irreducible root system, there is a unique "highest root", in which each simple root occurs with its maximal possible coefficient. These coefficients can be looked up in any decent table, and computed quickly by hand in types A-D-E (using a trick from the McKay correspondence). This gives a useful bound, so you don't mess around with root strings for longer than necessary.
Looking at Fulton and Harris like Mariano suggests is a good idea, but here's another answer which might be helpful to think about.
In the simply-laced case, put an orientation on the edges. Then the positive roots are in bijection with the indecomposable representations (over the complex numbers) of the corresponding quiver. More precisely, once we pick a way to label the nodes with simple roots, the dimension vector of an indecomposable representation gives the coefficients of a positive root as a linear combination of simple roots. So for example, in type A, a positive linear combination of simple roots is a root only if its support is connected (otherwise any representation with that dimension vector splits as a direct sum of the two pieces). If you play around with it, this also tells you that in type A the coefficients all have to be 1.
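As a toy illustration of the type A statement (the example and code are mine): the positive roots of $A_n$ are exactly the 0/1 vectors supported on an interval, which are the dimension vectors of the interval representations of an $A_n$ quiver.

```python
# Positive roots of A_n as interval-supported 0/1 vectors (dimension vectors of
# the interval representations of an A_n quiver); here n = 4.
n = 4
interval_roots = [tuple(1 if i <= k <= j else 0 for k in range(n))
                  for i in range(n) for j in range(i, n)]
print(len(interval_roots))   # n(n+1)/2 = 10 positive roots for A_4
print(interval_roots[:4])    # (1,0,0,0), (1,1,0,0), (1,1,1,0), (1,1,1,1)
```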
For the non-simply-laced case, we can reduce to the simply-laced case via folding. So for example, $G_2$ is the folding of $D_4$ by the order 3 automorphism $\sigma$ which rotates the three outer nodes. The right replacement (for our purposes) for representations of a "$G_2$ quiver" is representations of $D_4$ which are invariant under $\sigma$. And then again positive roots correspond to the dimension vectors of the indecomposable $\sigma$-invariant representations (indecomposable as objects of the category of $\sigma$-invariant representations). See Hubery's paper http://www.ams.org/mathscinet-getitem?mr=2025328 for details.
So for example, the short root corresponds to the middle node of $D_4$, while the long root corresponds to the orbit of 3 outer nodes. Orient the edges of $D_4$ inward, and put ${\bf C}^3$ at the middle node with basis $e_1, e_2, e_3$. Make the outer nodes two-dimensional, and let the three maps be injections with images $\langle e_1, e_2 \rangle$, $\langle e_2, e_3 \rangle$, and $\langle e_3, e_1 \rangle$ respectively. Then this representation is $\sigma$-invariant and has no $\sigma$-invariant summands (though it is decomposable as a $D_4$-representation). This corresponds to the root $3a+2b$ (with $a$ the short and $b$ the long simple root), which is the highest root of $G_2$.
This also works for non-Dynkin diagrams: by Kac's theorem, the dimension vectors of the indecomposable representations of an arbitrary quiver are exactly the positive roots of the associated Kac-Moody root system (though for imaginary roots the indecomposable is no longer unique up to isomorphism).