Intuition for Formal Definition of Linear Independence

Imagine you have a collection of arrows pointing in various directions. If they're linearly dependent, then you can stretch, shrink, and reverse (but not rotate) them in such a way that if you lay them head-to-tail then they form a closed loop. For example, if you have three arrows that happen to all lie in the same plane (linearly dependent), then you can form a triangle out of them, but you can't if one of them sticks out of the plane formed by the other two (linearly independent).
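The head-to-tail picture can be checked numerically. Below is a small sketch (the vectors `a`, `b`, `c` are my own hypothetical example, not from the answer above): `c` lies in the plane of `a` and `b`, so after reversing `c` (scaling by $-1$, no rotation) the three arrows laid head-to-tail return to the origin.

```python
import numpy as np

# Hypothetical example: three coplanar (hence linearly dependent) arrows.
a = np.array([2.0, 0.0])
b = np.array([0.0, 1.0])
c = np.array([2.0, 1.0])   # c = a + b

# Stretch/shrink/reverse only: reverse c, then lay all three head-to-tail.
loop = a + b + (-1.0) * c
print(loop)  # the loop closes: we are back at the origin
```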


Before we grapple with linear independence, it might be best to figure out what linear dependence means first.

When I think of the phrase "linear dependence" with regard to a set of vectors, what comes to mind for me is that, in some sense, one of those vectors "depends" on the other vectors in the set. The mathematical formalization for this dependence is:

A set of nonzero vectors $\{\mathbf{v}_i \}$ in a vector space $V$ over a field $F$ is called linearly dependent when there exists$^\dagger$ a $k$ so that one can write $\displaystyle \mathbf{v}_k = \sum_{n \neq k} c_n \mathbf{v}_n$, where $c_n \in F$.

So we say $\mathbf{v}_k$ "depends" on the other vectors: for any $d \in F$, one can arrive at the point $d\mathbf{v}_k$ simply by travelling some distance in each of the other directions$^\ddagger$ $\mathbf{v}_{i \neq k}$. Note that, because $\mathbf{v}_k$ is a nonzero vector, we must have $c_n \neq 0$ for at least one $n$. Next, we can rewrite the above as $\mathbf{v}_k - \displaystyle \sum_{n \neq k} c_n \mathbf{v}_n = 0$.

From this, it is easy to deduce that a set of vectors is linearly dependent $\iff$ we can find a set of constants $\{c_n\}$, not all zero, so that $\displaystyle \sum_n c_n \mathbf{v}_n = 0$. Negating this statement gives our definition of linear independence:

A set of vectors $\{\mathbf{v}_i \}$ is called linearly independent $\iff$ we cannot find a set of constants $\{c_n\}$, not all zero, so that $\displaystyle \sum_n c_n \mathbf{v}_n = 0$. This is to say, $c_n = 0$ for all $n$ is the only solution to this equation.


$^\dagger$ This $k$ is never unique: since $\mathbf{v}_k \neq 0$, at least one coefficient $c_m$ is nonzero, and solving the relation for $\mathbf{v}_m$ shows that $\mathbf{v}_m$ also depends on the remaining vectors.


$^\ddagger$ To give a concrete example, let $\displaystyle \mathbf{v}_1 = \left[1 \atop 0 \right], \mathbf{v}_2 = \left[0 \atop 1\right]$, and $\displaystyle \mathbf{v}_3 = \left[1 \atop 1\right]$ in $\mathbb{R}^2$. See that $\mathbf{v}_3$ depends on $\mathbf{v}_1$ and $\mathbf{v}_2$ since you can get to the point $d \mathbf{v}_3$ for any $d \in \mathbb{R}$ by first travelling $d$ units in the $\mathbf{v}_1$ direction and then $d$ units in the $\mathbf{v}_2$ direction.
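The coefficients in this footnote's example can be recovered by solving a linear system; here is a quick check (my own addition): writing $\mathbf{v}_3 = c_1 \mathbf{v}_1 + c_2 \mathbf{v}_2$ means solving $[\mathbf{v}_1 \; \mathbf{v}_2]\,\mathbf{c} = \mathbf{v}_3$.

```python
import numpy as np

v1 = np.array([1., 0.])
v2 = np.array([0., 1.])
v3 = np.array([1., 1.])

# Solve [v1 v2] c = v3 for the coefficients c = (c1, c2).
c = np.linalg.solve(np.column_stack([v1, v2]), v3)
print(c)  # [1. 1.], i.e. v3 = 1*v1 + 1*v2
```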


You have linear dependence when a vector can be expressed as a linear combination of the others in the set. Dependence in this sense is like the $y$ in $y=f(x)$: $y$ depends on the values of $x$ and is determined by them through $f$. This happens precisely when you can write a linear combination of the vectors that equals zero, since you can move one of the vectors to the other side and divide by its coefficient $-a_{i}$. Of course the coefficients must not all be zero, otherwise you would just have $0=0$.
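The "move to the other side and divide" step can be shown concretely (a hypothetical example of mine, with $a_3$ as the nonzero coefficient): from $a_1\mathbf{v}_1 + a_2\mathbf{v}_2 + a_3\mathbf{v}_3 = 0$ we get $\mathbf{v}_3 = -\frac{1}{a_3}(a_1\mathbf{v}_1 + a_2\mathbf{v}_2)$.

```python
import numpy as np

v1 = np.array([1., 2.])
v2 = np.array([3., 0.])
a1, a2, a3 = 2.0, 1.0, -1.0

# Solve a1*v1 + a2*v2 + a3*v3 = 0 for v3: divide by -a3 (a3 != 0).
v3 = -(a1 * v1 + a2 * v2) / a3

print(a1 * v1 + a2 * v2 + a3 * v3)  # [0. 0.]: a nontrivial combination
```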