Understanding power series and their representation of functions
Your last point about transcendental functions is not correct. Transcendental functions are those which are not algebraic; they may or may not have a power series representation. Moreover, by the Weierstrass approximation theorem, any continuous function on a closed interval can be uniformly approximated by polynomials, whether or not the function has a power series representation. So power series are not the only tool for polynomial approximation.
So we should not try to link power series representations with the transcendental nature of a function. Functions like $\sqrt{1 + x}$ or $1/(1 - x)$ also have power series representations, and they are clearly algebraic.
The fundamental aspect of a power series is that it is analytic in its region of convergence, and, conversely, that any function analytic in a certain region can be represented by some power series.
Well, apart from what Paramanand Singh wrote, you have a decently good understanding of how power series work.
"I feel that Power Series is something that is treated very poorly in many (introductory) textbooks. It seems as though authors keep on dodging the central ideas of Power Series, and their relation to functions, for seemingly unknown reasons."
Well, that is natural, since introductory textbooks try to make things easier for the reader. In calculus, for example, power series are one of the shortest topics we cover, simply because we rarely need them directly: to approximate, we have calculators. The point of a power series is often to approximate something, and indeed power series are often how our calculators compute trig functions, for example.
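To make that last remark concrete, here is a small Python sketch (the function name `sin_taylor` is mine) showing how a partial sum of the Maclaurin series already matches $\sin$ to many decimal places:

```python
import math

def sin_taylor(x, terms=10):
    # Partial sum of the Maclaurin series: sin x = sum (-1)^k x^(2k+1) / (2k+1)!
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

print(sin_taylor(1.0), math.sin(1.0))  # the two values agree to machine precision
```

Real calculator firmware uses more refined schemes, but the underlying idea of polynomial approximation is the same.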
"An analytic function is equal to its power series representation within the power series' radius of convergence"
This is true, but there is an even deeper meaning to analytic functions. Analytic functions are the pathway to analytic continuation. The idea is to take properties and relationships from the power series and apply them to the original function, even when the power series doesn't converge. Another use is to extend the domain of the original function. The Riemann zeta function is a famous example:
The Riemann zeta function is a meromorphic function (analytic except for isolated poles) given by the following series on the domain where the series converges:
$$\zeta(z):=\sum_{n=1}^\infty\frac1{n^z}\tag{$\Re(z)>1$}$$
There is the very similar Dirichlet eta function:
$$\eta(z):=\sum_{n=1}^\infty\frac{(-1)^{n+1}}{n^z}\tag{$\Re(z)>0$}$$
and Euler proved the relationship
$$\zeta(z)=\frac1{1-2^{1-z}}\eta(z)$$
using only the above definitions. Since they are analytic functions, the relationship holds even when the original definitions do not converge, effectively giving a new definition of the Riemann zeta function that has a larger domain than the original:
$$\zeta(z)=\frac1{1-2^{1-z}}\sum_{n=1}^\infty\frac{(-1)^{n+1}}{n^z}\tag{$\Re(z)>0$}$$
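As a numerical sanity check of Euler's relation (the function names below are mine), one can compare the two sides at $z=2$, where both series converge and $\zeta(2)=\pi^2/6$:

```python
import math

def eta(z, terms=10**5):
    # Dirichlet eta: alternating series, convergent for Re(z) > 0
    return sum((-1) ** (n + 1) / n ** z for n in range(1, terms + 1))

def zeta_via_eta(z, terms=10**5):
    # Euler's relation: zeta(z) = eta(z) / (1 - 2**(1 - z))
    return eta(z, terms) / (1 - 2 ** (1 - z))

print(zeta_via_eta(2.0))   # close to pi^2/6 ≈ 1.644934
print(math.pi ** 2 / 6)
```

The same formula keeps making sense for $0 < \Re(z) < 1$, where the original zeta series diverges but the alternating eta series still converges.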
Similarly, if you know the Taylor expansion theorem, you can extend the Riemann zeta function to $z\in\mathbb C$ with the exception of the singularity $z=1$. Interestingly, you can do this by taking the Taylor series around $a=0.1$, which is within the domain extended by the Dirichlet eta function. After that, you can take the expansion around $a=-0.5$; then, using that, you can take it around $a=-1.0$, then $a=-1.5$, and so on for as long as you want.
Another famous one is Euler's formula:
$$e^{i\theta}=\cos(\theta)+i\sin(\theta)$$
which follows from the power series expansion of both sides. It effectively extends the domain (again) of a function ($e^x$ and trig functions) outside their original domains.
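A quick way to see Euler's formula at work numerically (a sketch; the variable names are mine) is to sum the exponential series at $z = i\theta$ and compare with $\cos\theta + i\sin\theta$:

```python
import math

theta = 0.75
z = 1j * theta
# Partial sum of the exponential series e^z = sum z^k / k! at z = i*theta
series = sum(z ** k / math.factorial(k) for k in range(30))

print(series)
print(complex(math.cos(theta), math.sin(theta)))  # cos(theta) + i sin(theta)
```

Nothing about the original real definitions of $e^x$, $\cos$, and $\sin$ says they should be related this way; the power series makes the relationship visible.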
So yes, the original goal of power series was to approximate things, but they do more than approximate: they give deeper meaning to things that originally had no such meaning. They are often the key to special relationships between functions and to the extension of functions, which, as the example of $e^{i\theta}$ shows, can be very important not only in mathematics but also in physical applications.
It's not that your understanding is wrong, it's more that the wording of the question suggests that the definitions, properties, applications, and "cultural context" of power series are a bit tangled.
For concreteness and definiteness, let's work over the real numbers. If $x_{0}$ is a real number, and if $(a_{k})_{k=0}^{\infty}$ is an arbitrary sequence of real numbers, the associated power series with coefficients $(a_{k})$ and center $x_{0}$ is the expression $$ \sum_{k=0}^{\infty} a_{k} (x - x_{0})^{k}. \tag{1} $$
Power series make sense as purely algebraic entities. For example, two power series (with the same center) can be added termwise and multiplied using the "Cauchy product": \begin{gather*} \sum_{k=0}^{\infty} a_{k} (x - x_{0})^{k} + \sum_{k=0}^{\infty} b_{k} (x - x_{0})^{k} = \sum_{k=0}^{\infty} (a_{k} + b_{k}) (x - x_{0})^{k}, \\ \left(\sum_{k=0}^{\infty} a_{k} (x - x_{0})^{k}\right) \left(\sum_{k=0}^{\infty} b_{k} (x - x_{0})^{k}\right) = \sum_{k=0}^{\infty} \left(\sum_{j=0}^{k} a_{j} b_{k - j}\right) (x - x_{0})^{k}. \end{gather*} These definitions make sense whether or not the individual series converge (next item). The crucial point is, each coefficient of the sum or product is a finite algebraic expression in the coefficients of the "operands".
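The Cauchy product formula above can be sketched in a few lines of Python (the helper `cauchy_product` is my name, operating on truncated coefficient lists). Squaring the geometric series $\sum x^k$ should produce the coefficients $k+1$ of $1/(1-x)^2$:

```python
def cauchy_product(a, b):
    # c_k = sum_{j=0}^{k} a_j * b_{k-j}, for coefficient lists of equal length
    n = len(a)
    return [sum(a[j] * b[k - j] for j in range(k + 1)) for k in range(n)]

geom = [1] * 6                     # 1/(1-x) = 1 + x + x^2 + ...
print(cauchy_product(geom, geom))  # [1, 2, 3, 4, 5, 6], the coefficients of 1/(1-x)^2
```

Note that the computation is purely finite and algebraic: no convergence question arises until we try to evaluate the series at a point.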
For each real number $x$, the power series (1) is an infinite series of real numbers, which may converge (the sequence of partial sums $$ s_{n}(x) = \sum_{k=0}^{n} a_{k} (x - x_{0})^{k} $$ converges to a finite limit) or diverge (otherwise). Clearly, (1) converges for $x = x_{0}$. It's not difficult to show that: a. If (1) converges for some $x$ with $|x - x_{0}| = r$, then (1) converges for every $x$ with $|x - x_{0}| < r$;
b. If (1) diverges for some $x$ with $|x - x_{0}| = r$, then (1) diverges for every $x$ with $|x - x_{0}| > r$.
It follows that for every power series (1), there exists an extended non-negative real number $R$ (i.e., $0 \leq R \leq \infty$) such that (1) converges for $|x - x_{0}| < R$ and diverges for $|x - x_{0}| > R$. (If $R = 0$, the former condition is empty; if $R = \infty$, the latter is empty.) This $R$ is called the radius of convergence of the power series (1). The power series $$ \sum_{k=0}^{\infty} k!\, x^{k},\qquad \sum_{k=0}^{\infty} \frac{x^{k}}{R^{k}},\qquad \sum_{k=0}^{\infty} \frac{x^{k}}{k!} $$ have radii $0$, $R$, and $\infty$, respectively.
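These three radii can be glimpsed numerically via the Cauchy–Hadamard formula $1/R = \limsup_{k} |a_{k}|^{1/k}$ (not proved above, but standard). A rough sketch, sampling $|a_k|^{1/k}$ at a single large index (the helper name is mine, and $R = 3$ is an arbitrary choice for the middle series):

```python
import math

def root_term(a_k, k):
    # One sample of |a_k|^(1/k); Cauchy-Hadamard: 1/R = limsup_k |a_k|^(1/k)
    return abs(a_k) ** (1.0 / k)

k = 100
print(root_term(math.factorial(k), k))      # large and growing with k: R = 0
print(root_term(1 / 3.0 ** k, k))           # about 1/3: R = 3
print(root_term(1 / math.factorial(k), k))  # near 0: R = infinity
```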
If (1) has positive radius (i.e., $0 < R$), the sum of the series defines a function $$ p(x) = \sum_{k=0}^{\infty} a_{k} (x - x_{0})^{k} $$ whose domain is the set of $x$ with $|x - x_{0}| < R$. In this region, $p$ is infinitely differentiable, and its derivatives are found by termwise differentiation, e.g., $$ p'(x) = \sum_{k=1}^{\infty} ka_{k} (x - x_{0})^{k-1} = \sum_{k=0}^{\infty} (k + 1) a_{k+1} (x - x_{0})^{k}. $$ Each derivative is itself a power series, convergent in the same region.
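Termwise differentiation is easy to check numerically on the geometric series, where $p(x) = \sum x^{k} = 1/(1-x)$ and the differentiated series $\sum (k+1)x^{k}$ should equal $1/(1-x)^{2}$ (a sketch; function names are mine):

```python
def p(x, terms=200):
    # Geometric series: p(x) = sum x^k = 1/(1 - x) for |x| < 1
    return sum(x ** k for k in range(terms))

def p_prime_termwise(x, terms=200):
    # Differentiated term by term: p'(x) = sum (k+1) x^k = 1/(1 - x)^2
    return sum((k + 1) * x ** k for k in range(terms))

x = 0.5
print(p(x), 1 / (1 - x))                      # both about 2
print(p_prime_termwise(x), 1 / (1 - x) ** 2)  # both about 4
```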
A function $f$ is analytic at $x_{0}$ if $f$ is represented by a convergent power series in some open interval about $x_{0}$, and is analytic if $f$ is analytic at $x_{0}$ for every interior point $x_{0}$ of the domain of $f$.
If $f$ is infinitely differentiable at some point $x_{0}$, the Taylor series of $f$ at $x_{0}$ is the power series $$ \sum_{k=0}^{\infty} \frac{f^{(k)}(x_{0})}{k!} (x - x_{0})^{k}. $$ As is well known, a function can be infinitely differentiable at $x_{0}$ without the Taylor series converging to $f$ in any open interval about $x_{0}$. The standard example is $f(x) = e^{-1/x^{2}}$ if $x \neq 0$, and $f(0) = 0$, whose Taylor series at $0$ is identically zero.
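One can watch this flatness numerically: $f(x)/x^{n} \to 0$ as $x \to 0$ for every $n$, which is the mechanism forcing all the Taylor coefficients at $0$ to vanish (a quick sketch in Python, checking $n = 10$):

```python
import math

def f(x):
    # The standard flat function: smooth everywhere, all derivatives vanish at 0
    return math.exp(-1.0 / x ** 2) if x != 0 else 0.0

# f(x) / x^10 collapses toward 0 as x -> 0, even divided by a high power of x
for x in (0.5, 0.2, 0.1):
    print(x, f(x) / x ** 10)
```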
A transcendental function is real-analytic by definition. As Paramanand Singh notes, the defining property of a transcendental function is not "requires an infinite series" (i.e., "not a polynomial"), but "does not satisfy a polynomial equation in two variables" (i.e., "is not an algebraic function").
Power series are important for many reasons. The most common rationale for introducing them in calculus is to obtain algebraic/analytic expressions for the exponential function (and the closely related circular and hyperbolic functions, and power functions with non-integer exponents), etc., etc. For example, from the exponential power series, one obtains $$ e = \exp(1) = \sum_{k=0}^{\infty} \frac{1}{k!}, $$ from which one can easily show $e$ is irrational.
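Since the exponential series converges so fast, even a short partial sum of $\sum 1/k!$ pins down $e$ (a sketch; the tail after $20$ terms is below $1/20! \approx 4 \times 10^{-19}$):

```python
import math

# Partial sum of e = sum_{k>=0} 1/k!
partial = sum(1 / math.factorial(k) for k in range(20))
print(partial, math.e)  # agree to machine precision
```

The same rapid decay of $1/k!$ is what makes the irrationality proof for $e$ work.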
Another application is to obtain power series solutions of linear ordinary differential equations with non-constant coefficients. Power series are by no means the only infinite series useful for studying functions; Fourier series and wavelets come immediately to mind.
As for generalizations: Power series with complex coefficients (and complex centers) make perfect sense; the wordings and notation above are chosen to minimize the modifications required to consider complex power series. There are useful notions of matrix-valued power series, operator-valued power series, etc.
Complex power series exhibit interesting behavior (such as monodromy) not encountered over the reals, because a connected open set in the plane need not be simply-connected. Further, every holomorphic (i.e., complex-differentiable) function of one complex variable is automatically analytic. The logical strength of being "once complex-differentiable" gives complex analysis a completely different flavor from real analysis.