Why doesn't the dot product give you the coefficients of the linear combination?

Your $v_1$ and $v_2$ need to be orthonormal. To expand on learnmore's answer: the reason you need orthogonality for this to work is that if $v_1$ and $v_2$ are not orthogonal, they have a nonzero dot product $v_1\cdot v_2$. This means that $v_2$ carries some weight "in the direction" of $v_1$. Your intuition that $c = a\cdot v_1$ is the "amount of $a$ in the direction $v_1$" is correct; keep that intuition! Similarly, $d=a\cdot v_2$ is the amount of $a$ in the direction of $v_2$.

However, since $v_1$ and $v_2$ are not perpendicular, the number $c$ has a "piece" of $v_2$ in it, and the number $d$ has a "piece" of $v_1$ in it. So, when you try to expand $a$ in the basis $\{v_1,v_2\}$, you need an extra term to compensate for the "non-orthogonal mixing" between $v_1$ and $v_2$.

The technical details are as follows. Since $v_1$ and $v_2$ are linearly independent, we can write

$$ a = \alpha v_1+\beta v_2 $$ for some scalars $\alpha, \beta$. Now, take the dot product of $a$ with $v_1$ and expand it out:

$$ a\cdot v_1 = (\alpha v_1+\beta v_2)\cdot v_1 = \alpha\, v_1\cdot v_1 + \beta\, v_1\cdot v_2 = \alpha + \beta\, v_1\cdot v_2 $$ using $v_1\cdot v_1 = 1$, since the $v_i$ are unit vectors. Similarly, expand out $a\cdot v_2$:

$$ a\cdot v_2 = \alpha v_1\cdot v_2 + \beta $$

Those extra terms ($\beta v_1\cdot v_2$ and $\alpha v_1\cdot v_2$) express the non-orthogonality. Written another way, we have

$$ \alpha = a\cdot v_1 - \beta v_1\cdot v_2 $$ and

$$ \beta = a\cdot v_2 - \alpha v_1\cdot v_2 $$ which shows clearly that each correct expansion coefficient contains $a\cdot v_j$, but also another piece compensating for the non-orthogonality. I could go on (you can write these two equations as a $2\times 2$ linear system and solve it with matrices), but hopefully this is enough to convince you.
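If you want to see this numerically, here is a small sketch (using NumPy, with made-up example vectors) that builds $a = \alpha v_1 + \beta v_2$ from non-orthogonal unit vectors, shows that the raw dot products are not $\alpha$ and $\beta$, and then recovers the true coefficients by solving the two equations above as a linear system:

```python
# Illustrative sketch only: example vectors chosen arbitrarily.
import numpy as np

v1 = np.array([1.0, 0.0])                               # unit vector
v2 = np.array([np.cos(np.pi / 3), np.sin(np.pi / 3)])   # unit vector, 60 degrees from v1

alpha, beta = 2.0, 1.0                                   # chosen expansion coefficients
a = alpha * v1 + beta * v2                               # so a = alpha*v1 + beta*v2 by construction

print(a @ v1, a @ v2)         # 2.5 and 2.0 -- not (alpha, beta) = (2.0, 1.0)

# Solve the 2x2 system from the text:
#   a.v1 = alpha*(v1.v1) + beta*(v1.v2)
#   a.v2 = alpha*(v1.v2) + beta*(v2.v2)
G = np.array([[v1 @ v1, v1 @ v2],
              [v2 @ v1, v2 @ v2]])                       # Gram matrix (diagonal is 1 for unit vectors)
coeffs = np.linalg.solve(G, np.array([a @ v1, a @ v2]))
print(coeffs)                 # [2. 1.] -- recovers alpha and beta
```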


Your intuition is mostly correct, and you would probably have seen the flaws in your reasoning if you had drawn a picture like this:

[Figure: two unit vectors $\mathbf{U}$ and $\mathbf{V}$, a third vector $\mathbf{W}$, the projections of $\mathbf{W}$ onto $\mathbf{U}$ and $\mathbf{V}$, and their parallelogram sum]

We have two linearly independent unit vectors $\mathbf{U}$ and $\mathbf{V}$, and a third vector $\mathbf{W}$ (the green one). We want to write $\mathbf{W}$ as a linear combination of $\mathbf{U}$ and $\mathbf{V}$. The picture shows the projections $(\mathbf{W} \cdot \mathbf{U})\mathbf{U}$ (in red) and $(\mathbf{W} \cdot \mathbf{V})\mathbf{V}$ (in blue). These are the things you call "shadows", and that's a good name. As you can see, when you add them together using the parallelogram rule, you get the black vector, which is obviously not equal to the original vector $\mathbf{W}$. In other words,

$$ \mathbf{W} \ne (\mathbf{W} \cdot \mathbf{U})\mathbf{U} + (\mathbf{W} \cdot \mathbf{V})\mathbf{V} $$

You certainly can write $\mathbf{W}$ in the form $\mathbf{W} = \alpha\mathbf{U} + \beta\mathbf{V}$, but $\alpha = \mathbf{W} \cdot \mathbf{U}$ and $\beta = \mathbf{W} \cdot \mathbf{V}$ are not the correct coefficients unless $\mathbf{U}$ and $\mathbf{V}$ are orthogonal. And you can even calculate the correct coefficients $\alpha$ and $\beta$ using dot products, as you expected. It turns out that

$$ \mathbf{W} = (\mathbf{W} \cdot \bar{\mathbf{U}})\mathbf{U} + (\mathbf{W} \cdot \bar{\mathbf{V}})\mathbf{V} $$

where $(\bar{\mathbf{U}}, \bar{\mathbf{V}})$ is the so-called dual basis of $(\mathbf{U}, \mathbf{V})$. You can learn more by looking up the dual basis.
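For the curious, here is a small numerical sketch (NumPy, with arbitrarily chosen vectors) of the dual-basis statement above: the dual vectors are built so that $\bar{\mathbf{U}}\cdot\mathbf{U}=1$, $\bar{\mathbf{U}}\cdot\mathbf{V}=0$, $\bar{\mathbf{V}}\cdot\mathbf{U}=0$, $\bar{\mathbf{V}}\cdot\mathbf{V}=1$, so the dual-basis expansion reproduces $\mathbf{W}$ exactly while the "shadow" sum does not.

```python
# Illustrative sketch only: U, V, W chosen arbitrarily.
import numpy as np

U = np.array([1.0, 0.0])
V = np.array([np.cos(np.pi / 4), np.sin(np.pi / 4)])   # unit vector, not orthogonal to U
W = np.array([1.0, 2.0])

# "Shadow" sum -- does NOT reproduce W when U and V are not orthogonal.
shadow_sum = (W @ U) * U + (W @ V) * V
print(shadow_sum)                        # differs from W

# Dual basis: rows of the transposed inverse of the matrix whose rows are U and V,
# so that U_bar.U = 1, U_bar.V = 0, V_bar.U = 0, V_bar.V = 1.
B = np.vstack([U, V])
U_bar, V_bar = np.linalg.inv(B).T

reconstructed = (W @ U_bar) * U + (W @ V_bar) * V
print(np.allclose(reconstructed, W))     # True
```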


What you are thinking is correct for an orthonormal basis:

Suppose that $\{e_1,\dots,e_n\}$ is an orthonormal basis of an inner product space $V$. Then any vector $x$ can be expressed as $x=\sum_{i=1}^n c_ie_i$.

In order to get the $c_j$ for $j=1,2,\dots,n$, we can use the fact that

$$ \langle x,e_j\rangle =\Big\langle \sum_{i=1}^n c_ie_i,\,e_j\Big\rangle =\sum_{i=1}^n c_i\langle e_i,e_j\rangle=c_j, \qquad j=1,2,\dots,n, $$ since $\langle e_i,e_j\rangle=1$ when $i=j$ and $0$ otherwise.
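As a quick sanity check, here is a short sketch (NumPy, with an orthonormal basis constructed just for illustration) verifying that the inner products $c_j = \langle x, e_j\rangle$ really do reconstruct $x$:

```python
# Illustrative sketch only: e1, e2 form an orthonormal basis by construction.
import numpy as np

theta = np.pi / 6
e1 = np.array([np.cos(theta), np.sin(theta)])
e2 = np.array([-np.sin(theta), np.cos(theta)])

x = np.array([3.0, -1.0])
c1, c2 = x @ e1, x @ e2                      # c_j = <x, e_j>

print(np.allclose(c1 * e1 + c2 * e2, x))     # True: x = c1*e1 + c2*e2
```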