Why are vector spaces not isomorphic to their duals?
This is just Bill Dubuque's sci.math proof (see Google Groups or MathForum) mentioned in the comments, expanded.
Edit. I'm also reorganizing this so that it flows a bit better.
Let $F$ be a field, and let $V$ be a vector space of dimension $\kappa$ over $F$.
Then $V$ is naturally isomorphic to $\mathop{\bigoplus}\limits_{i\in\kappa}F$, the set of all functions $f\colon \kappa\to F$ of finite support. Let $\epsilon_i$ be the element of $V$ that sends $i$ to $1$ and all $j\neq i$ to $0$ (that is, you can think of it as the $\kappa$-tuple with coefficients in $F$ that has $1$ in the $i$th coordinate, and $0$s elsewhere).
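As a minimal sketch of this representation (in Python; the helper names `epsilon`, `add`, and `scale` are mine, not from any library), one can model an element of $V$ as a dictionary from indices to nonzero coefficients:

```python
# Elements of V = ⨁_{i∈κ} F modeled as dicts mapping index -> nonzero
# coefficient, so the support is finite by construction.

def epsilon(i):
    """The basis vector sending i to 1 and every j != i to 0."""
    return {i: 1}

def add(v, w):
    """Coordinatewise sum; zero coefficients are dropped so the dict
    records exactly the support."""
    out = dict(v)
    for i, c in w.items():
        out[i] = out.get(i, 0) + c
        if out[i] == 0:
            del out[i]
    return out

def scale(c, v):
    """Scalar multiple c*v."""
    return {i: c * a for i, a in v.items()} if c != 0 else {}

# Example over Q: 3*eps_0 - eps_2
x = add(scale(3, epsilon(0)), scale(-1, epsilon(2)))  # {0: 3, 2: -1}
```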
Lemma 1. If $\dim(V)=\kappa$, and either $\kappa$ or $|F|$ are infinite, then $|V|=\kappa|F|=\max\{\kappa,|F|\}$.
Proof. If $\kappa$ is finite, then the hypothesis forces $|F|$ to be infinite; since $V\cong F^{\kappa}$, we get $|V|=|F|^{\kappa}=|F|=\kappa|F|=\max\{\kappa,|F|\}$.
Assume then that $\kappa$ is infinite. Each element of $V$ can be represented uniquely as a linear combination of the $\epsilon_i$. Since $\kappa$ is infinite, there are $\kappa$ distinct finite subsets of $\kappa$; and for a subset with $n$ elements, there are $|F|^n$ linear combinations of the corresponding $\epsilon_i$.
If $\kappa\leq |F|$, then in particular $F$ is infinite, so $|F|^n=|F|$. Hence there are $|F|$ distinct vectors for each of the $\kappa$ distinct subsets (even after discarding those with a zero coefficient), for a total of $\kappa|F|$ vectors in $V$.
If $|F|\lt\kappa$, then $|F|^n\lt\kappa$ since $\kappa$ is infinite; so there are at most $\kappa$ vectors for each subset, so there are at most $\kappa^2 = \kappa$ vectors in $V$. Since the basis has $\kappa$ elements, $\kappa\leq|V|\leq\kappa$, so $|V|=\kappa=\max\{\kappa,|F|\}$. QED
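For example, if $F=\mathbb{F}_2$ and $\kappa=\aleph_0$, the lemma gives $|V|=\max\{\aleph_0,2\}=\aleph_0$: there are only countably many finitely supported $0$-$1$ sequences. If instead $F=\mathbb{R}$ and $\kappa=\aleph_0$, it gives $|V|=\max\{\aleph_0,2^{\aleph_0}\}=2^{\aleph_0}$.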
Now let $V^*$ be the dual of $V$. Since $V^* = \mathcal{L}(V,F)$ (where $\mathcal{L}(V,W)$ is the vector space of all $F$-linear maps from $V$ to $W$), and $V=\mathop{\oplus}\limits_{i\in\kappa}F$, from abstract nonsense (spelled out below) we know that $$V^*\cong \prod_{i\in\kappa}\mathcal{L}(F,F) \cong \prod_{i\in\kappa}F.$$ Therefore, $|V^*| = |F|^{\kappa}$.
Added. Why is it that if $A$ is a basis of a vector space $V$, then $V^*$ is in bijection with the set of all functions from $A$ to the ground field?
A functional $f\colon V\to F$ is completely determined by its values on a basis (just like any other linear transformation); thus, if two functionals agree on $A$, then they agree everywhere. Hence, there is a natural injection, via restriction, from the set of all linear transformations $V\to F$ (denoted $\mathcal{L}(V,F)$) to the set of all functions $A\to F$, $F^A\cong \prod\limits_{a\in A}F$. Moreover, given any function $g\colon A\to F$, we can extend $g$ linearly to all of $V$: given $\mathbf{x}\in V$, there exists a unique finite subset $\mathbf{a}_1,\ldots,\mathbf{a}_n$ (pairwise distinct) of $A$ and unique scalars $\alpha_1,\ldots,\alpha_n$, none equal to zero, such that $\mathbf{x}=\alpha_1\mathbf{a}_1+\cdots+\alpha_n\mathbf{a}_n$ (that's from the definition of basis as a spanning set that is linearly independent; spanning ensures the existence of at least one such expression, linear independence guarantees that there is at most one such expression); we define $g(\mathbf{x})$ to be $$g(\mathbf{x})=\alpha_1g(\mathbf{a}_1)+\cdots+\alpha_ng(\mathbf{a}_n).$$ (The image of $\mathbf{0}$ is the empty sum, hence equal to $0$). Now, let us show that this is linear.
First, note that if $\mathbf{x}=\beta_1\mathbf{a}_{i_1}+\cdots+\beta_m\mathbf{a}_{i_m}$ is any expression of $\mathbf{x}$ as a linear combination of pairwise distinct elements of the basis $A$, then it must be the case that this expression is equal to the one we already had, plus some terms with coefficient equal to $0$. This follows from the linear independence of $A$: take $$\mathbf{0}=\mathbf{x}-\mathbf{x} = (\alpha_1\mathbf{a}_1+\cdots+\alpha_n\mathbf{a}_n) - (\beta_1\mathbf{a}_{i_1}+\cdots+\beta_m\mathbf{a}_{i_m}).$$ After any cancellation that can be done, you are left with a linear combination of elements in the linearly independent set $A$ equal to $\mathbf{0}$, so all coefficients must be equal to $0$. That means that we can likewise define $g$ as follows: given any expression of $\mathbf{x}$ as a linear combination of elements of $A$, $\mathbf{x}=\gamma_1\mathbf{a}_1+\cdots+\gamma_m\mathbf{a}_m$, with $\mathbf{a}_i\in A$ not necessarily distinct and $\gamma_i$ scalars not necessarily different from $0$, we define $$g(\mathbf{x}) = \gamma_1g(\mathbf{a}_1)+\cdots+\gamma_mg(\mathbf{a}_m).$$ This will be well-defined by the linear independence of $A$. And now it is very easy to see that $g$ is linear on $V$: if $\mathbf{x}=\gamma_1\mathbf{a}_1+\cdots+\gamma_m\mathbf{a}_m$ and $\mathbf{y}=\delta_{1}\mathbf{a'}_1+\cdots+\delta_n\mathbf{a'}_n$ are expressions for $\mathbf{x}$ and $\mathbf{y}$ as linear combinations of elements of $A$, then $$\begin{align*} g(\mathbf{x}+\lambda\mathbf{y}) &= g\Bigl(\gamma_1\mathbf{a}_1+\cdots+\gamma_m\mathbf{a}_m+\lambda(\delta_{1}\mathbf{a'}_1+\cdots+\delta_n\mathbf{a'}_n)\Bigr)\\ &= g\Bigl(\gamma_1\mathbf{a}_1+\cdots+\gamma_m\mathbf{a}_m+ \lambda\delta_{1}\mathbf{a'}_1+\cdots+\lambda\delta_n\mathbf{a'}_n\Bigr)\\ &= \gamma_1g(\mathbf{a}_1) + \cdots + \gamma_mg(\mathbf{a}_m) + \lambda\delta_1g(\mathbf{a'}_1) + \cdots + \lambda\delta_ng(\mathbf{a'}_n)\\ &= g(\mathbf{x})+\lambda g(\mathbf{y}). \end{align*}$$
Thus, the map $\mathcal{L}(V,F)\to F^A$ is in fact onto, giving a bijection.
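Because the extension only ever takes finite sums, it can be written down as a short sketch, reusing the dictionary representation of finite-support vectors from the earlier snippet (`extend_linearly` and `g_on_basis` are illustrative names):

```python
def extend_linearly(g_on_basis, x):
    """Extend a function on basis indices to a functional on V.

    `g_on_basis(i)` gives the value of g on the basis vector eps_i;
    `x` is a finite-support vector {index: coefficient}. The sum is
    finite because x has finite support; no convergence is needed.
    """
    return sum(c * g_on_basis(i) for i, c in x.items())

# g may be nonzero on ALL basis vectors; evaluation is still a finite sum.
g = lambda i: 1               # the "all ones" function on the basis
x = {0: 3, 2: -1}             # 3*eps_0 - eps_2
print(extend_linearly(g, x))  # 3*1 + (-1)*1 = 2
```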
This is the "linear-algebra" proof. The "abstract nonsense proof" relies on the fact that if $A$ is a basis for $V$, then $V$ is isomorphic to $\mathop{\bigoplus}\limits_{a\in A}F$, a direct sum of $|A|$ copies of $A$, and on the following universal property of the direct sum:
Definition. Let $\mathcal{C}$ be a category, and let $\{X_i\}_{i\in I}$ be a family of objects in $\mathcal{C}$. A coproduct of the $X_i$ is an object $C$ of $\mathcal{C}$ together with a family of morphisms $\iota_j\colon X_j\to C$ such that for every object $X$ and every family of morphisms $g_j\colon X_j\to X$, there exists a unique morphism $\mathbf{g}\colon C\to X$ such that for all $j$, $g_j = \mathbf{g}\circ \iota_j$.
That is, a family of maps from each element of the family is equivalent to a single map from the coproduct (just like a family of maps into the members of a family is equivalent to a single map into the product of the family). In particular, we get that:
Theorem. Let $\mathcal{C}$ be a category in which the morphisms between any two objects form a set; let $\{X_i\}_{i\in I}$ be a family of objects of $\mathcal{C}$, and let $(C,\{\iota_j\}_{j\in I})$ be their coproduct. Then for every object $X$ of $\mathcal{C}$ there is a natural bijection $$\mathrm{Hom}_{\mathcal{C}}(C,X) \longleftrightarrow \prod_{j\in I}\mathrm{Hom}_{\mathcal{C}}(X_j,X).$$
The left hand side is the collection of morphisms from the coproduct to $X$; the right hand side is the collection of all families of morphisms from each element of $\{X_i\}_{i\in I}$ into $X$.
In the vector space case, the fact that a linear transformation is completely determined by its value on a basis is what establishes that a vector space $V$ with basis $A$ is the coproduct of $|A|$ copies of the one-dimensional vector space $F$. So we have that $$\mathcal{L}(V,W) \leftrightarrow \mathcal{L}\left(\mathop{\oplus}\limits_{a\in A}F,W\right) \leftrightarrow \prod_{a\in A}\mathcal{L}(F,W).$$ But a linear transformation from $F$ to $W$ is equivalent to a map from the basis $\{1\}$ of $F$ into $W$, so $\mathcal{L}(F,W) \cong W$. Thus, we get that if $V$ has a basis of cardinality $\kappa$ (finite or infinite), we have: $$\mathcal{L}(V,F) \leftrightarrow \mathcal{L}\left(\mathop{\oplus}_{i\in\kappa}F,F\right) \leftrightarrow \prod_{i\in\kappa}\mathcal{L}(F,F) \leftrightarrow \prod_{i\in\kappa}F = F^{\kappa}.$$
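As a concrete finite instance of this bijection: if $V=F\oplus F=F^2$ with standard basis $\{e_1,e_2\}$, then a functional $f\in\mathcal{L}(F^2,F)$ is exactly the data of the pair $(f(e_1),f(e_2))\in F\times F$, recovering $\mathcal{L}(F^2,F)\cong F^2$; the content of the theorem is that this bookkeeping persists for arbitrary index sets.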
Lemma 2. If $\kappa$ is infinite, then $\dim(V^*)\geq |F|$.
Proof. If $F$ is finite, then the inequality is immediate. Assume then that $F$ is infinite. Since $\kappa$ is infinite, we may identify $\omega$ with a subset of $\kappa$. For each $c\in F$, $c\neq 0$, define $\mathbf{f}_c\colon V\to F$ by $\mathbf{f}_c(\epsilon_n) = c^n$ if $n\in\omega$, and $\mathbf{f}_c(\epsilon_i)=0$ if $i\in\kappa\setminus\omega$. These functionals are linearly independent:
Suppose that $c_1,\ldots,c_m$ are pairwise distinct nonzero elements of $F$, and that $\alpha_1\mathbf{f}_{c_1} + \cdots + \alpha_m\mathbf{f}_{c_m} = \mathbf{0}$. Then for each $i\in\omega$ we have $$\alpha_1 c_1^i + \cdots + \alpha_m c_m^i = 0.$$ Viewing the first $m$ of these equations as linear equations in the $\alpha_j$, the corresponding coefficient matrix is the Vandermonde matrix, $$\left(\begin{array}{cccc} 1 & 1 & \cdots & 1\\ c_1 & c_2 & \cdots & c_m\\ c_1^2 & c_2^2 & \cdots & c_m^2\\ \vdots & \vdots & \ddots & \vdots\\ c_1^{m-1} & c_2^{m-1} & \cdots & c_m^{m-1} \end{array}\right),$$ whose determinant is $\prod\limits_{1\leq i\lt j\leq m}(c_j-c_i)\neq 0$. Thus, the system has a unique solution, to wit $\alpha_1=\cdots=\alpha_m = 0$.
Thus, the $|F|$ linear functionals $\mathbf{f}_c$ are linearly independent, so $\dim(V^*)\geq |F|$. QED
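As a quick numerical illustration of the Vandermonde step (not part of the proof; it assumes $F=\mathbb{R}$ and uses floating point via `numpy`):

```python
# The functionals f_c, evaluated on eps_0, ..., eps_{m-1}, give the rows
# of a transposed Vandermonde matrix; its determinant equals
# prod_{i<j} (c_j - c_i), nonzero when the c's are pairwise distinct.
import numpy as np
from itertools import combinations

cs = np.array([1.0, 2.0, 3.0, 5.0])            # pairwise distinct nonzero c's
M = np.vander(cs, len(cs), increasing=True).T  # column j is (1, c_j, c_j^2, ...)
det = np.linalg.det(M)
prod = np.prod([cj - ci for ci, cj in combinations(cs, 2)])
print(det, np.isclose(det, prod))              # nonzero determinant, True
```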
To recapitulate: Let $V$ be a vector space of dimension $\kappa$ over $F$, with $\kappa$ infinite. Let $V^*$ be the dual of $V$. Then $V\cong\mathop{\bigoplus}\limits_{i\in\kappa}F$ and $V^*\cong\prod\limits_{i\in\kappa}F$.
Let $\lambda$ be the dimension of $V^*$. Note that $\lambda\geq\kappa$ is infinite, since the $\kappa$ coordinate projections $V\to F$ are linearly independent; so by Lemma 1 we have $|V^*| = \lambda|F|$.
By Lemma 2, $\lambda=\dim(V^*)\geq |F|$, so $|V^*| = \lambda$. On the other hand, since $V^*\cong\prod\limits_{i\in\kappa}F$, then $|V^*|=|F|^{\kappa}$.
Therefore, $\lambda= |F|^{\kappa}\geq 2^{\kappa} \gt \kappa$. Thus, $\dim(V^*)\gt\dim(V)$, so $V$ is not isomorphic to $V^*$.
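For example, take $F=\mathbb{F}_2$ and $\kappa=\aleph_0$, so that $V\cong\mathop{\bigoplus}\limits_{i\in\omega}\mathbb{F}_2$. Then $|V^*|=2^{\aleph_0}$, and since $|F|$ is finite, Lemma 1 forces $\dim(V^*)=2^{\aleph_0}\gt\aleph_0=\dim(V)$: the dual already jumps to uncountable dimension.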
Added${}^{\mathbf{2}}$. Some results on vector spaces and bases.
Let $V$ be a vector space, and let $A$ be a maximal linearly independent set (that is, $A$ is linearly independent, and if $B$ is any subset of $V$ that properly contains $A$, then $B$ is linearly dependent).
In order to guarantee that there is a maximal linearly independent set in any vector space, one needs to invoke the Axiom of Choice in some manner, since the existence of such a set is, as we will see below, equivalent to the existence of a basis; however, here we are assuming that we already have such a set given. I believe that the Axiom of Choice is not involved in any of what follows.
Proposition. $\mathrm{span}(A) = V$.
Proof. Since $A\subseteq V$, then $\mathrm{span}(A)\subseteq V$. Let $v\in V$. If $v\in A$, then $v\in\mathrm{span}(A)$. If $v\notin A$, then $B=A\cup\{v\}$ is linearly dependent by maximality. Therefore, there exist pairwise distinct vectors $b_1,\ldots,b_n$ in $B$ and scalars $\alpha_1,\ldots,\alpha_n$, not all zero, such that $\alpha_1b_1+\cdots+\alpha_nb_n=\mathbf{0}$. Since $A$ is linearly independent, at least one of the $b_i$ must be equal to $v$; say $b_1=v$ (so $b_2,\ldots,b_n\in A$). Moreover, $v$ must occur with a nonzero coefficient, again by the linear independence of $A$. So $\alpha_1\neq 0$, and we can then write $$v = \frac{1}{\alpha_1}(-\alpha_2b_2 -\cdots - \alpha_nb_n)\in\mathrm{span}(A).$$ This proves that $V\subseteq \mathrm{span}(A)$. $\Box$
Proposition. Let $V$ be a vector space, and let $X$ be a linearly independent subset of $V$. If $v\in\mathrm{span}(X)$, then any two expressions of $v$ as linear combinations of elements of $X$ differ only in having extra summands of the form $0x$ with $x\in X$.
Proof. Let $v = a_1x_1+\cdots+a_nx_n = b_1y_1+\cdots+b_my_m$ be two expressions of $v$ as linear combinations of elements of $X$.
We may assume without loss of generality that $n\leq m$. Reordering the $x_i$ and the $y_j$ if necessary, we may assume that $x_1=y_1$, $x_2=y_2,\ldots,x_{k}=y_k$ for some $k$, $0\leq k\leq n$, and $x_1,\ldots,x_k,x_{k+1},\ldots,x_n,y_{k+1},\ldots,y_m$ are pairwise distinct. Then $$\begin{align*} \mathbf{0} &= v-v\\ &=(a_1x_1+\cdots+a_nx_n)-(b_1y_1+\cdots+b_my_m)\\ &= (a_1-b_1)x_1 + \cdots + (a_k-b_k)x_k + a_{k+1}x_{k+1}+\cdots + a_nx_n - b_{k+1}y_{k+1}-\cdots - b_my_m. \end{align*}$$ As this is a linear combination of pairwise distinct elements of $X$ equal to $\mathbf{0}$, it follows from the linear independence of $X$ that $a_{k+1}=\cdots=a_n=0$, $b_{k+1}=\cdots=b_m=0$, and $a_1=b_1$, $a_2=b_2,\ldots,a_k=b_k$. That is, the two expressions of $v$ as linear combinations of elements of $X$ differ only in that there are extra summands of the form $0x$ with $x\in X$ in them. QED
Corollary. Let $V$ be a vector space, and let $A$ be a maximal independent subset of $V$. If $W$ is a vector space, and $f\colon A\to W$ is any function, then there exists a unique linear transformation $T\colon V\to W$ such that $T(a)=f(a)$ for each $a\in A$.
Proof. Existence. Given $v\in V$, we have $v\in\mathrm{span}(A)$. Therefore, we can express $v$ as a linear combination of elements of $A$, $v = \alpha_1a_1+\cdots+\alpha_na_n$. Define $$T(v) = \alpha_1f(a_1)+\cdots+\alpha_nf(a_n).$$ Note that $T$ is well-defined: if $v = \beta_1b_1+\cdots+\beta_mb_m$ is any other expression of $v$ as a linear combination of elements of $A$, then by the proposition above the two expressions differ only in summands of the form $0x$; but these summands do not affect the value of $T$.
Note also that $T$ is linear, arguing as above. Finally, since $a\in A$ can be expressed as $a=1a$, then $T(a) = 1f(a) = f(a)$, so the restriction of $T$ to $A$ is equal to $f$.
Uniqueness. If $U$ is any linear transformation $V\to W$ such that $U(a)=f(a)$ for all $a\in A$, then for every $v\in V$, write $v=\alpha_1a_1+\cdots+\alpha_na_n$ with $a_i\in A$. Then $$\begin{align*} U(v) &= U(\alpha_1a_1+\cdots + \alpha_na_n)\\ &= \alpha_1U(a_1) + \cdots + \alpha_n U(a_n)\\ &= \alpha_1f(a_1)+\cdots + \alpha_n f(a_n)\\ &= \alpha_1T(a_1) + \cdots + \alpha_n T(a_n)\\ &= T(\alpha_1a_1+\cdots+\alpha_na_n)\\ &= T(v).\end{align*}$$ Thus, $U=T$. QED
The "this guy" you're looking for is just the function that takes each of your basis vectors and sends them to 1.
Note that this is not in the span of the set of functionals that each take a single basis vector to $1$ and all others to $0$, because the span is defined to be the set of finite linear combinations of those functionals. And a finite linear combination of things that have finite support will still have finite support, and thus can't send infinitely many independent vectors all to $1$.
You may want to say, "But look! If I add up these infinitely many functions, I clearly get a function that sends all my basis vectors to 1!" But this is actually a very tricky process. What you need is a notion of convergence if you want to add infinitely many things, and a bare vector space does not come equipped with one.
In the end, it boils down to a cardinality issue, not of the vector spaces themselves but of their dimensions. In the example you give, $\mathbb{R}^{<\omega}$ has countably infinite dimension, but the dimension of its dual is uncountable.
(Added, in response to comment below): Think of all the possible ways you can have a function which is $1$ on some set of your basis vectors and $0$ on the rest. The only way such a function can lie in the span of your coordinate functionals is if it takes the value $1$ on only finitely many of those vectors. Since your starting space is infinite-dimensional, there are uncountably many such functions, and (for a countably infinite basis) only countably many of them have finite support, so uncountably many lie outside the span. You can only ever incorporate finitely many of them by "adding" them in one at a time (or even countably many at a time), so you'll never establish the isomorphism you're looking for.
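To make the finite-support obstruction explicit, here is a small sketch, assuming we represent a functional by its values on basis indices (the names `chi` and `combo` are mine, purely illustrative):

```python
# A finite linear combination of the coordinate functionals chi_i can be
# nonzero on only finitely many basis vectors, so the "all ones"
# functional is never reached.

def chi(i):
    """Coordinate functional: 1 on the i-th basis vector, 0 on the rest."""
    return lambda j: 1 if j == i else 0

def combo(coeffs):
    """Finite linear combination: sum of c * chi(i) for (i, c) in coeffs."""
    return lambda j: sum(c * chi(i)(j) for i, c in coeffs.items())

f = combo({0: 1, 1: 1, 2: 1})        # chi_0 + chi_1 + chi_2
print([f(j) for j in range(5)])      # [1, 1, 1, 0, 0]: support stays finite
all_ones = lambda j: 1               # a valid functional, outside the span
```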
Only attempting to address that one point Asaf raised in comments/edits. I refer to the CW answer by Arturo & Bill for the cardinality argument and an actual answer to the original question.
Assume that $A=\{e_i\mid i \in I\}$ is a basis for $V$. Let $f:A\rightarrow F$ be any function. This function can be extended linearly to an element of $V^*$ as follows. An arbitrary element $x\in V$ can be written as a finite linear combination of the basis elements in a unique way $$ x=\sum_{j=1}^nc_j e_{i_j}, $$ where $e_{i_1},e_{i_2},\ldots,e_{i_n}$ are the basis vectors needed to write $x$. This finite subset of $A$ (as well as the natural number $n$) obviously depends on $x$. Anyway, we can define $$ f(x)=\sum_{j=1}^n c_j f(e_{i_j}). $$ As the sum is finite, we do only vector space operations on the r.h.s. (no convergence question or some such). As the presentation of $x$ as a linear combination of elements of $A$ is unique (up to addition of terms with coefficients equal to zero), $f(x)$ is well defined. It is straightforward to check that the mapping $f$ defined in this way is linear, i.e. an element of $V^*$.
What may have been confusing is that we do not require $f$ to have finite support for the above "linear extension" to work as described. The upshot is that we only need a finite number of vectors from the basis to write any given vector $x$. In other words, the finiteness of the sum in the definition of the linear extension of $f$ comes from $A$ being a basis, not from the support of $f$ (which does not need to be finite).
We can similarly extend a function with singleton support. If $\chi_i:A\rightarrow F$ is the function defined by $e_i\mapsto 1, e_j\mapsto 0$, for all $j\in I, j\neq i$, let's call its linear extension to an element of $V^*$ also $\chi_i$. What's the span of the mappings $\chi_i$? Only those linear functionals $f$ with $|A\setminus\ker f|<\infty$ can be written as linear combinations of $\chi_i, i\in I$. Therefore the span of the linear mappings $\chi_i,i\in I$, is not all of $V^*$ unless $\dim V$ is finite.
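Concretely: a combination $\alpha_1\chi_{i_1}+\cdots+\alpha_n\chi_{i_n}$ vanishes on every basis vector $e_j$ with $j\notin\{i_1,\ldots,i_n\}$, so its restriction to $A$ has finite support; when $I$ is infinite, the functional sending every $e_i$ to $1$ is a perfectly good element of $V^*$ that no such combination can equal.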