Will $2$ linear equations with $2$ unknowns always have a solution?
Each linear equation represents a line in the plane. Most of the time the two lines intersect in exactly one point, which is the simultaneous solution you seek. If the two lines have the same slope, either they never meet (no solution) or they are the same line (every point on it is a solution). Adding a third equation to the mix adds another line; it is unlikely to pass through the point that solves the first two equations, but it might.
There are three possible cases for $2$ linear equations with $2$ unknowns, determined by the lines' slopes and intercepts:
$\qquad$ $\mathbf{0}$ solution points $\qquad$ $\qquad$ $\mathbf{1}$ solution point $\qquad$ $\qquad$ $\mathbf{\infty}$ solution points
$\qquad \quad$ $\nexists$ no existence $\qquad$ $\qquad$ $\exists !$ uniqueness $\qquad$ $\qquad$ $\exists$ no uniqueness
The lines have the form $y(x) = mx + b$.
Case 1: parallel lines
A solution does not exist.
The lines are parallel: they have the same slope.
$$ % \begin{align} % y_{1}(x) &= m x + b_{1} \\ % y_{2}(x) &= m x + b_{2} \\ % \end{align} % $$
Case 2: intersecting lines
We have existence and uniqueness.
The slopes are distinct.
$$m_{1} \ne m_{2}$$
$$ % \begin{align} % y_{1}(x) &= m_{1} x + b_{1} \\ % y_{2}(x) &= m_{2} x + b_{2} \\ % \end{align} % $$
Case 3: coincident lines
We have existence, but not uniqueness. There is an infinite number of solutions. Every point solves the system of equations.
Both lines are the same.
$$ % \begin{align} % y_{1}(x) &= m x + b \\ % y_{2}(x) &= m x + b \\ % \end{align} % $$
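The three cases above can be sketched as a small classification routine; the function name and the sample values in the test are illustrative, not from the text.

```python
import math

def classify(m1, b1, m2, b2):
    """Number of solution points for y = m1*x + b1 and y = m2*x + b2."""
    if m1 != m2:
        return 1            # Case 2: distinct slopes, one intersection
    if b1 == b2:
        return math.inf     # Case 3: coincident lines, infinitely many
    return 0                # Case 1: parallel distinct lines, no solution
```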
To see this through linear algebra, look at the problem in terms of $\color{blue}{range}$ and $\color{red}{null}$ spaces.
The linear system for two equations is $$ % \begin{align} % m_{1} x - y &= b_{1} \\ % m_{2} x - y &= b_{2} \\ % \end{align} $$ which has the matrix form $$ % \begin{align} % \mathbf{A} x &= b \\ % \left[ \begin{array}{cc} m_{1} & -1 \\ m_{2} & -1 \\ \end{array} \right] % \left[ \begin{array}{c} x \\ y \\ \end{array} \right] % &= % \left[ \begin{array}{c} b_{1} \\ b_{2} \\ \end{array} \right] % \end{align} % $$
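A quick numpy sketch of this matrix form, with illustrative slopes and intercepts; a point $(x, y)$ solves the system exactly when $\mathbf{A}$ applied to it reproduces the data vector:

```python
import numpy as np

# Matrix form A x = b of the system m_i x - y = b_i (illustrative values)
m1, m2, b1, b2 = 3.0, 0.5, 1.0, -2.0
A = np.array([[m1, -1.0],
              [m2, -1.0]])
b = np.array([b1, b2])

# A candidate point (x, y) solves the system iff A @ [x, y] equals b
x, y = 1.2, 2.6  # m1*x - y = 3.6 - 2.6 = 1.0, m2*x - y = 0.6 - 2.6 = -2.0
assert np.allclose(A @ np.array([x, y]), b)
```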
The Fundamental Theorem provides a natural framework for classifying data and solutions.
Fundamental Theorem of Linear Algebra
A matrix $\mathbf{A} \in \mathbb{C}^{m\times n}_{\rho}$ (rank $\rho$) induces four fundamental subspaces: $$ \begin{align} % \mathbb{C}^{n} = \color{blue}{\mathcal{R} \left( \mathbf{A}^{*} \right)} \oplus \color{red}{\mathcal{N} \left( \mathbf{A} \right)} \\ % \mathbb{C}^{m} = \color{blue}{\mathcal{R} \left( \mathbf{A} \right)} \oplus \color{red} {\mathcal{N} \left( \mathbf{A}^{*} \right)} % \end{align} $$
Case 1: No existence
The matrix $\mathbf{A}$ has a rank defect $(m_{1} = m_{2})$ and $b_{1} \ne b_{2}$. $$ b = \color{blue}{b_{\mathcal{R}}} + \color{red}{b_{\mathcal{N}}} $$ It is the $\color{red}{null}$ space component which precludes a direct solution. (Interestingly enough, there is still a least squares solution.)
The data vector $b$ is not a combination of the columns of $\mathbf{A}$. The column space decomposition is $$ \mathbb{C}^{2} = \color{blue}{\mathcal{R} \left( \mathbf{A} \right)} \oplus \color{red} {\mathcal{N} \left( \mathbf{A}^{*} \right)} $$ Since both columns of $\mathbf{A}$ are multiples of the same vector, we have $$ \color{blue}{\mathcal{R} \left( \mathbf{A} \right)} = % \text{span } \left\{ \, \color{blue}{ \left[ \begin{array}{c} 1 \\ 1 \end{array} \right] } \, \right\} \qquad \color{red}{\mathcal{N} \left( \mathbf{A}^{*} \right)} = % \text{span } \left\{ \, \color{red}{ \left[ \begin{array}{r} -1 \\ 1 \end{array} \right] } \, \right\} $$
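A minimal numpy sketch of this case, with illustrative values: `lstsq` returns the least squares solution mentioned above, which solves $\mathbf{A}x = \color{blue}{b_{\mathcal{R}}}$, the projection of $b$ onto the range of $\mathbf{A}$.

```python
import numpy as np

# Case 1 sketch: parallel lines (equal slopes, different right-hand sides)
m, b1, b2 = 1.0, 0.0, 2.0
A = np.array([[m, -1.0],
              [m, -1.0]])
b = np.array([b1, b2])

assert np.linalg.matrix_rank(A) == 1   # rank defect: no exact solution

# lstsq yields the least squares solution, which solves A x = b_R,
# where b_R is the projection of b onto the range of A
x, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)
b_R = A @ x
```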
Case 2: Existence and uniqueness
The matrix $\mathbf{A}$ has full rank $(m_{1}\ne m_{2})$. The data vector is entirely in the $\color{blue}{range}$ space $\color{blue}{\mathcal{R} \left( \mathbf{A} \right)}$ $$ b = \color{blue}{b_{\mathcal{R}}} $$
The $\color{red}{null}$ space is trivial: $\color{red}{\mathcal{N} \left( \mathbf{A}^{*} \right)}=\left\{\mathbf{0}\right\}$. $$ \mathbb{C}^{2} = \color{blue}{\mathcal{R} \left( \mathbf{A} \right)} $$ The columns of $\mathbf{A}$ span the plane: $$ \color{blue}{\mathcal{R} \left( \mathbf{A} \right)} = % \text{span } \left\{ \, \color{blue}{ \left[ \begin{array}{c} m_{1} \\ m_{2} \end{array} \right] }, \, \color{blue}{ \left[ \begin{array}{r} -1 \\ -1 \end{array} \right] } \right\} $$
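A numpy sketch of the full-rank case, with illustrative values; since $m_{1} \ne m_{2}$, `solve` returns the unique intersection point:

```python
import numpy as np

# Case 2 sketch: distinct slopes (illustrative values)
m1, b1 = 2.0, 1.0
m2, b2 = -1.0, 4.0
A = np.array([[m1, -1.0],
              [m2, -1.0]])
b = np.array([b1, b2])

# Full rank: the columns span the plane, so the solution exists and is unique
assert np.linalg.matrix_rank(A) == 2
x, y = np.linalg.solve(A, b)
```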
Case 3: Existence, no uniqueness
The matrix $\mathbf{A}$ has a rank defect $(m_{1} = m_{2} = m)$, but now $b_{1} = b_{2}$, so $$ b = \color{blue}{b_{\mathcal{R}}} $$ The column space still has both $\color{blue}{range}$ and $\color{red}{null}$ space components: $$ \mathbb{C}^{2} = \color{blue}{\mathcal{R} \left( \mathbf{A} \right)} \oplus \color{red} {\mathcal{N} \left( \mathbf{A}^{*} \right)} $$ The decomposition is $$ \color{blue}{\mathcal{R} \left( \mathbf{A} \right)} = % \text{span } \left\{ \, \color{blue}{ \left[ \begin{array}{c} 1 \\ 1 \end{array} \right] } \, \right\} \qquad \color{red}{\mathcal{N} \left( \mathbf{A}^{*} \right)} = % \text{span } \left\{ \, \color{red}{ \left[ \begin{array}{r} -1 \\ 1 \end{array} \right] } \, \right\} $$
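A numpy sketch of the coincident case, with illustrative values: `lstsq` picks one particular (minimum-norm) solution, and adding any multiple of a null-space direction of $\mathbf{A}$ gives another solution.

```python
import numpy as np

# Case 3 sketch: the same line twice (illustrative slope and intercept)
m, c = 1.0, 2.0
A = np.array([[m, -1.0],
              [m, -1.0]])
b = np.array([c, c])

assert np.linalg.matrix_rank(A) == 1         # rank defect
x0, *_ = np.linalg.lstsq(A, b, rcond=None)   # one particular (minimum-norm) solution
n = np.array([1.0, m])                       # spans N(A): m*1 - 1*m = 0

# x0 plus any multiple of the null-space direction also solves the system
assert np.allclose(A @ (x0 + 3.7 * n), b)
```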
Postscript: the theoretical foundations here are useful, but the road to understanding starts with simple examples like the one in @Nick's comment.
Let's think using vector notation.
A linear system with two unknowns $x$ and $y$, and two equations $$ \begin{align*} v_1 x + w_1 y &= a_1 \\ v_2 x + w_2 y &= a_2 \end{align*} $$ can be written in vector notation as $$ x\, \vec{v} + y\, \vec{w} = \vec{a}. $$ That is, you want to know if $\vec{a}$ can be written as a linear combination of $\vec{v}$ and $\vec{w}$.
With the vectors $\vec{v}$ and $\vec{w}$ fixed, to say that a solution always exists, whatever $\vec{a}$ is, is the same as to say that $\vec{v}$ and $\vec{w}$ span the whole plane. If they do not (i.e., $\vec{v}$ and $\vec{w}$ are parallel), then depending on $\vec{a}$ the solution might not exist; and when it does exist, it will not be unique.
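The span test can be sketched with a rank comparison; the helper name and sample vectors below are illustrative, not from the text. $\vec{a}$ is a combination of $\vec{v}$ and $\vec{w}$ exactly when appending it as a column does not increase the rank.

```python
import numpy as np

def in_span(v, w, a):
    """True iff a is a linear combination of v and w (rank comparison)."""
    M = np.column_stack([v, w])
    return np.linalg.matrix_rank(np.column_stack([v, w, a])) == np.linalg.matrix_rank(M)

# Parallel v and w span only a line, so most choices of a fail:
parallel_miss = in_span([1.0, 2.0], [2.0, 4.0], [1.0, 1.0])   # a off the line
parallel_hit  = in_span([1.0, 2.0], [2.0, 4.0], [3.0, 6.0])   # a on the line
independent   = in_span([1.0, 0.0], [0.0, 1.0], [5.0, 7.0])   # v, w span the plane
```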