Understanding non-solvable algebraic numbers
How can one feel comfortable with non-solvable algebraic numbers?
The nice thing about solvable numbers is this idea that they have a formula. You can manipulate the formula as if it were actually a number, using algebraic rules you have probably felt comfortable with for a while. For instance $\sqrt{3+\sqrt{6}}+2$ is an algebraic number. What do you get if you add it to $7$? Well $\left(\sqrt{3+\sqrt{6}}+2\right)+7=\sqrt{3+\sqrt{6}}+9$ seems like a decent answer. As a side note: there are actually some reasonably hard algorithmic questions along these lines, but I'll assume they don't worry you. :-)
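If you like, a computer algebra system will play the same game for you; here is a minimal sketch in Python with sympy (my choice of tool, nothing canonical about it):

```python
from sympy import sqrt, simplify

# The solvable algebraic number sqrt(3 + sqrt(6)) + 2, kept as a formula.
alpha = sqrt(3 + sqrt(6)) + 2

# Adding 7 just merges the rational parts, exactly as done by hand above.
print(simplify(alpha + 7))  # sqrt(sqrt(6) + 3) + 9
```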
We'd like to be able to manipulate other algebraic numbers with similar comfort. The first method I was taught is pretty reasonable:
Kronecker's construction: If $x$ really is an algebraic number, then it is the root of some irreducible polynomial $x^n - a_{n-1} x^{n-1} - \ldots - a_1 x - a_0$. But how do we manipulate $x$? It's almost silly: we treat it just like the symbol $x$, and add and multiply as usual, except that $x \cdot x^{n-1}$ needs to be replaced by $a_{n-1} x^{n-1} + \ldots + a_1 x + a_0$, and division is handled by replacing $1/x$ with $( x^{n-1} - a_{n-1} x^{n-2} - \ldots - a_2 x - a_1)/a_0$. This is very similar to "integers mod $n$", where you replace big numbers by their remainder mod $n$. In fact this is just replacing a polynomial in $x$ with its remainder mod $x^n - a_{n-1} x^{n-1} - \ldots - a_1 x - a_0$.
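If you want to experiment with this, here is a minimal sketch in Python with sympy; the polynomial $x^5-x+1$ and the two elements are my own arbitrary choices, and `rem` and `invert` are just polynomial remainder and the extended Euclidean algorithm:

```python
from sympy import symbols, rem, invert, expand

x = symbols('x')
m = x**5 - x + 1   # an irreducible polynomial; x "is" one of its roots

a = x**3 + 2       # two "numbers" in the field Q[x]/(m)
b = x**4 - x

# Multiply as plain polynomials, then reduce: take the remainder mod m.
print(rem(expand(a * b), m, x))

# Division: invert computes 1/a mod m via the extended Euclidean algorithm.
inv_a = invert(a, m, x)
print(rem(expand(a * inv_a), m, x))  # 1, so inv_a really is 1/a
```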
I found it somewhat satisfying, but in many ways it is very mysterious. We use the same symbol for many different algebraic numbers; each time we have to keep track of the minimal polynomial floating in the background. It also raises deep questions about how to tell two algebraic numbers apart. Luckily, more or less all of these questions have clean algorithmic answers, and they are described in Cohen's textbook CCANT (A Course in Computational Algebraic Number Theory, Henri Cohen, 1993).
Companion matrices: But years later, it still bugged me. Then I studied splitting fields of group representations. The crazy thing about these fields is that they are subrings of matrix rings, so the “numbers” were actually matrices. You've probably seen tricks like $$\mathbb{C} = \left\{ \begin{bmatrix} a & b \\ -b & a \end{bmatrix} : a,b \in \mathbb{R} \right\}$$ where we make a bigger field out of matrices over a smaller field. It turns out this is always possible: if $K \leq F$ are fields, then $F$ is a $K$-vector space, and the function $f:F \to M_n(K) : x \mapsto ( y \mapsto xy )$ is an injective homomorphism of fields, so that $f(F)$ is a field isomorphic to $F$ but whose “numbers” are just $n \times n$ matrices over $K$, where $n$ is the dimension of $F$ as a $K$-vector space (and yes, $n$ could be infinite in general, but for algebraic numbers it is finite).
That might seem a little complicated, but $f$ just says "what does multiplying look like?" For instance, if $\mathbb{C} = \mathbb{R} \oplus \mathbb{R} i$, then multiplying by $a+bi$ sends $1$ to $a+bi$ and $i$ to $-b+ai$. The first row is $[a,b]$ and the second row $[-b,a]$. Too easy.
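Here is that recipe as code, a minimal sketch with numpy (the helper name `mult_matrix` is made up for illustration):

```python
import numpy as np

def mult_matrix(a, b):
    """Matrix of "multiply by a + bi" on C with basis {1, i}; rows are images."""
    # (a + bi) * 1 = a + bi  -> first row  [a, b]
    # (a + bi) * i = -b + ai -> second row [-b, a]
    return np.array([[a, b],
                     [-b, a]])

# Multiplying the matrices matches multiplying the complex numbers:
# (1 + 2i)(3 + 4i) = -5 + 10i.
print(mult_matrix(1, 2) @ mult_matrix(3, 4))  # equals mult_matrix(-5, 10)
```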
Ok, fine, but that assumes you already know how to multiply, and perhaps you are not yet comfortable enough to multiply non-solvable algebraic numbers! Again we use the polynomial $x^n - a_{n-1} x^{n-1} - \ldots - a_1 x - a_0$, but this time as a matrix. We use the same rule, viewing $F=K \oplus Kx \oplus Kx^2 \oplus \ldots \oplus Kx^{n-1}$ and asking what $x$ does to each basis element: well, $x^i$ is typically sent to $x^{i+1}$. It's only at the last basis element that things get funny:
$$f(x) = \begin{bmatrix} 0 & 1 & 0 & 0 & \ldots & 0 & 0 \\ 0 & 0 & 1 & 0 & \ldots & 0 & 0 \\ 0 & 0 & 0 & 1 & \ldots & 0 & 0 \\ & & & & \ddots & & \\ 0 & 0 & 0 & 0 & \ldots & 1 & 0 \\ 0 & 0 & 0 & 0 & \ldots & 0 & 1 \\ a_0 & a_1 & a_2 & a_3 & \ldots & a_{n-2} & a_{n-1} \end{bmatrix}$$
So this fancy “number” $x$ just becomes a matrix, most of whose entries are $0$. For instance $x^2 - (-1) = x^2 + 1$ gives the matrix $i = \left[\begin{smallmatrix} 0 & 1 \\ -1 & 0 \end{smallmatrix}\right]$.
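You can check this numerically; a minimal sketch with numpy, with both matrices transcribed straight from the recipe above:

```python
import numpy as np

# Companion matrix of x^2 + 1: the 2x2 version of "i".
i = np.array([[0, 1],
              [-1, 0]])
print(i @ i)  # [[-1, 0], [0, -1]], i.e. i^2 = -1

# Companion matrix of x^5 - x + 1 (here a_0 = -1, a_1 = 1, the rest 0).
C = np.array([[0, 1, 0, 0, 0],
              [0, 0, 1, 0, 0],
              [0, 0, 0, 1, 0],
              [0, 0, 0, 0, 1],
              [-1, 1, 0, 0, 0]])

# Cayley-Hamilton: C satisfies its own characteristic polynomial, so
# C^5 - C + I = 0, i.e. this matrix really is a root of x^5 - x + 1.
print(np.linalg.matrix_power(C, 5) - C + np.eye(5))  # the zero matrix
```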
The nice part here is that different algebraic numbers actually get different matrix representations. The dark part is making sure that if you have two unrelated algebraic numbers, they actually multiply together like elements of a field. You see, $M_n(K)$ has many subfields, but is not itself a field, so you have to choose matrices that both lie within a single subfield. For splitting fields and centralizer fields and all sorts of handy dandy fancy fields, you absolutely can make sure everything you care about comes from the field. Starting from just a bunch of polynomials, though, you need to be careful and find a single polynomial that works for both; that this is always possible is the primitive element theorem.
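For instance, sympy can find that single polynomial for you; a minimal sketch with the classic pair $\sqrt{2}$ and $\sqrt{3}$, whose primitive element is $\sqrt{2}+\sqrt{3}$:

```python
from sympy import sqrt, symbols, minimal_polynomial

x = symbols('x')

# sqrt(2) + sqrt(3) generates a field containing both sqrt(2) and sqrt(3),
# so one degree-4 polynomial serves both "unrelated" numbers at once.
print(minimal_polynomial(sqrt(2) + sqrt(3), x))  # x**4 - 10*x**2 + 1
```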
This also lets you see the difference between eigenvalues in the field $K$ and eigenvalues (“numbers”) in the field $F$: the former are actually numbers, or multiples of the identity matrix, while the latter are full-fledged matrices that happen to lie in a subfield. If you ever studied the “real form” of the eigenvalue decomposition with $2\times 2$ blocks, those $2 \times 2$ blocks are exactly the $\begin{bmatrix}a&b\\-b&a\end{bmatrix}$ complex numbers.
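Concretely, the block with $a=1$, $b=2$ "is" $1+2i$, and its eigenvalues over $\mathbb{C}$ are exactly $1 \pm 2i$; a quick numpy check:

```python
import numpy as np

# The 2x2 block representing the complex number 1 + 2i.
z = np.array([[1.0, 2.0],
              [-2.0, 1.0]])

# Over K = R this matrix has no eigenvalues; over C its eigenvalues are
# the number it represents and its conjugate.
print(np.linalg.eigvals(z))  # [1.+2.j  1.-2.j]
```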
The fundamental theorem of algebra tells us that every polynomial of degree $n$ has $n$ (not necessarily distinct) roots in the complex numbers.
Polynomials which are not solvable by radicals have (at least one) root that cannot be written by any combination of the operations of addition, multiplication, and the taking of $n$th roots. An example is $x^5-x+1$. That doesn't mean the roots don't exist, though; they're just (real or complex) numbers that you can't get to using only those operations. You can sometimes describe them with operations other than addition, multiplication, and $n^\text{th}$ roots; for example, you can approximate them to arbitrary precision using Newton's method (much like transcendental numbers).
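For instance, here is a minimal Newton's method sketch for the real root of $x^5-x+1$ (the starting guess $-1$ is an arbitrary choice that happens to be close enough):

```python
# Newton's method for x^5 - x + 1: iterate t -> t - f(t)/f'(t).
def f(t):
    return t**5 - t + 1

def df(t):
    return 5 * t**4 - 1

t = -1.0  # starting guess near the real root
for _ in range(20):
    t -= f(t) / df(t)

print(t)  # about -1.1673, the unique real root, to machine precision
```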
Transcendental numbers are different because they aren't the root of any nonzero polynomial with rational coefficients. $\pi$ is an example.
For algebraic numbers, we can always simplify when there is a power at least the degree of that number. If $$p(x)=a_0+a_1x+a_2x^2+\cdots +a_nx^n$$ is the smallest polynomial (with respect to degree) for which $p(\alpha)=0$, then whenever we see $\alpha^m$ for $m\geq n=\operatorname{deg}(p)$, we can rearrange the equation $$a_0+a_1\alpha+a_2\alpha^2+\cdots +a_n\alpha^n=0$$ to solve for $\alpha^n$ in terms of $\alpha^i$ for $i\leq n-1$ (dividing through by $a_n\neq 0$). So, any expression involving $\alpha$ can be rewritten using only powers of $\alpha$ between $0$ and $n-1$.
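For example, if $\alpha$ is a root of $x^5-x+1$ (so $\alpha^5=\alpha-1$), higher powers collapse one step at a time: $$\alpha^6=\alpha\cdot\alpha^5=\alpha^2-\alpha,\qquad \alpha^7=\alpha\cdot\alpha^6=\alpha^3-\alpha^2,$$ and so on: nothing above $\alpha^4$ ever survives.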
On the other hand, no matter what polynomial we evaluate at $\pi$, we can never simplify beyond just substituting $\pi$ for $x$.
"this about trying to understand what kind of numbers are in the set of non-solvable algebraic numbers...."
I don't think it's easy to say much about the algebraics that can't be expressed in radicals, once you have said that they are the ones that can't be expressed in radicals.
Suppose you have some irreducible polynomial of degree at least 2, with integer coefficients. Its roots are algebraic numbers. Whether they are expressible in radicals or not is, in general, hard to tell. There is a procedure for determining the "Galois group" of the polynomial; then, there is a procedure for determining whether or not that group is a "solvable" group. If the group is solvable, then the roots of the polynomial can be expressed in radicals; if not, not.
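If you want to see this in action, recent versions of sympy can compute Galois groups of polynomials up to degree 6 (the exact import path and availability are assumptions about your version):

```python
from sympy import symbols
from sympy.polys.numberfields.galoisgroups import galois_group

x = symbols('x')

# galois_group returns the group together with a flag saying whether it
# lies inside the alternating group; we only need the group itself here.
G, _ = galois_group(x**5 - x + 1, x)
print(G.order())      # 120, so the Galois group is all of S_5
print(G.is_solvable)  # False: the roots are not expressible in radicals
```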
I hope you get to study Galois Theory some day --- it is a beautiful subject.