Zero vector of a vector space
Here's an example. Let $V$ be the set of all $n$-tuples of strictly positive numbers $x_1,\ldots,x_n$ satisfying $x_1+\cdots+x_n=1$. Define "addition" of such vectors by
$$ (x_1,\ldots,x_n) \mathbin{\text{“}{+}\text{''}} (y_1,\ldots,y_n) = \frac{(x_1 y_1,\ldots,x_n y_n)}{x_1 y_1 + \cdots + x_n y_n }. $$
This is a vector space whose zero element is $$ \left( \frac 1 n , \ldots, \frac 1 n \right). $$ The additive inverse of $(x_1,\ldots,x_n)$ is $$ \frac{\left( \dfrac 1 {x_1}, \ldots, \dfrac 1 {x_n} \right)}{\dfrac 1 {x_1} + \cdots + \dfrac 1 {x_n}}. $$ This operation is involved in a basic identity on conditional probabilities: $$ (\Pr(A_1),\ldots,\Pr(A_n)) \mathbin{\text{“}{+}\text{''}} k\cdot(\Pr(D\mid A_1),\ldots,\Pr(D\mid A_n)) = (\Pr(A_1\mid D),\ldots,\Pr(A_n\mid D)), $$ where $k$ is whatever positive constant makes the entries of the second tuple sum to $1$, so that it lies in $V$. In practice one wouldn't bother with $k$: just multiply term by term and then normalize.
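To see that $\left( \frac 1 n , \ldots, \frac 1 n \right)$ really is the zero element, note that the normalization divides out the common factor $\frac 1 n$: $$ (x_1,\ldots,x_n) \mathbin{\text{“}{+}\text{''}} \left( \frac 1 n , \ldots, \frac 1 n \right) = \frac{\left( \dfrac{x_1}{n}, \ldots, \dfrac{x_n}{n} \right)}{\dfrac{x_1}{n} + \cdots + \dfrac{x_n}{n}} = \frac{(x_1,\ldots,x_n)}{x_1 + \cdots + x_n} = (x_1,\ldots,x_n), $$ since $x_1+\cdots+x_n=1$ for every element of $V$.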
Here's a more down-to-earth example. Look at $\mathbb R^3$ and say you want to put the zero point at $\vec p = (2,3,7)$. Then define "addition" as follows: $$ \vec a \mathbin{\text{“}{+}\text{''}} \vec b = \underbrace{\vec p + (\vec a - \vec p) + (\vec b - \vec p)}_{\begin{smallmatrix} \text{These are the usual} \\ \text{addition and subtraction.} \end{smallmatrix}}. $$
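One can check directly that $\vec p$ acts as the zero vector of this structure: $$ \vec a \mathbin{\text{“}{+}\text{''}} \vec p = \vec p + (\vec a - \vec p) + (\vec p - \vec p) = \vec a, $$ and the additive inverse of $\vec a$ is $2\vec p - \vec a$, since $\vec p + (\vec a - \vec p) + \bigl((2\vec p - \vec a) - \vec p\bigr) = \vec p$.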
Michael Hardy provides a very good answer. I want to explain why examples like his are so exceptional.
If you have a vector space (let's say finite-dimensional), then once you choose a basis and represent vectors by their coordinates in that basis, the zero vector will always be $(0,0,\ldots,0)$.
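The reason is just the uniqueness of coordinates: for any basis $b_1,\ldots,b_n$ we have $$ \mathbf 0 = 0\cdot b_1 + 0\cdot b_2 + \cdots + 0\cdot b_n, $$ and a vector has exactly one coordinate tuple with respect to a given basis, so the zero vector's coordinates are $(0,0,\ldots,0)$ no matter which basis you chose.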
We usually describe elements of $\mathbb R^n$ by their coordinates with respect to the standard basis, and the same goes for its subspaces, so this question doesn't come up there.
The exotic examples only arise when you use coordinates that are not really indigenous to the vector space. Those coordinates may have some interesting mathematical structure of their own, but one structure they will not have is the structure of the vector space they are representing. Calling them "coordinates" is almost a lie, since they don't act like vector-space coordinates at all: for instance, $(0,0,\ldots,0)$ is not the zero vector.
In linear algebra textbooks one sometimes encounters the example $V = (0, \infty)$, the set of positive reals, with "addition" defined by $$ u \oplus v = uv $$ and "scalar multiplication" defined by $$ c \odot u = u^{c}. $$ It's straightforward to show $(V, \oplus, \odot)$ is a vector space, but the zero vector (i.e., the identity element for $\oplus$) is $1$.
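A quick check of the relevant axioms: $$ 1 \oplus u = 1\cdot u = u, \qquad u \oplus \frac 1 u = u\cdot\frac 1 u = 1, \qquad 0 \odot u = u^{0} = 1, $$ so $1$ plays the role of $\mathbf 0$, the "negative" of $u$ is $1/u$, and multiplying any vector by the scalar $0$ gives the zero vector, as it must.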
(The pleasure of "relabeling" this example to look like a more familiar space is left as an exercise.)