Why dividing by zero still works
The division your teacher did is polynomial division. He did not divide by zero; he divided by $x^2 - 4x + 5$.
The long division he did is just an algorithm that allows you to get the following identity:
$$x^3 - 3x^2 + 2x - 1 = (x^2 - 4x + 5)(x+1) + x-6$$
This is an identity involving multiplication, not division. Now, when you plug in $x = 2 + i$, you're not dividing by zero; you're multiplying by zero, which you should agree is allowed.
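To make this concrete, here is the evaluation of both sides at $x = 2+i$, using the identity above. Since $(2+i)^2 - 4(2+i) + 5 = (3+4i) - (8+4i) + 5 = 0$, the first summand on the right vanishes, and
$$x^3 - 3x^2 + 2x - 1\,\Big|_{x\,=\,2+i} = (2+11i) - 3(3+4i) + (4+2i) - 1 = -4+i = (x-6)\Big|_{x\,=\,2+i}.$$
Nothing was divided by zero; the factor that vanishes is merely multiplied by $x+1$.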
The division was a division in the polynomial ring $\mathbb{C}[x]$ by a non-zero polynomial.
For example, dividing $x^{2}+2x+1$ by the non-zero polynomial $x+1$ gives quotient $x+1$ and remainder $0$, i.e. $$(x+1)(x+1)=x^{2}+2x+1.$$ The result is an equality of polynomials, and it remains valid when you substitute any element of the field, even $x=-1$.
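For instance, substituting $x=-1$ into both sides gives $(-1+1)(-1+1)=0$ on the left and $(-1)^2+2(-1)+1=0$ on the right; the equality holds even though the divisor $x+1$ vanishes there.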
$\begin{eqnarray}{\bf Hint}\quad&& f(x) = r(x) + q(x)\, \color{#c00}{g(x)}\quad \text{[divide } f\ \text{ by }\ g\ \text{ with quotient }\, q, \text{ remainder }\, r\,]\\ \stackrel{\large \color{#c00}{g(a)\,=\,0}}\Rightarrow && f(a) = r(a)\quad \text{by evaluation at } x = a,\text{ using }\ \color{#c00}{g(a) = 0}. \end{eqnarray}$
Because we wrote the "division" as $\ f = q\,\color{#c00}g + r,\,$ not as $\smash[b]{\, \dfrac{f}{\color{#c00}g} = q + \dfrac{r}{\color{#c00}g},\,}$ there is no division by $\,0\,$ when we evaluate at a root of $\,\color{#c00}g.$
Your question has $\,g\,$ quadratic, with root $\,a = 2+i.\,$ The simpler linear case is well-known.
For linear $\,g = x\! -\!a\,$ we have $\, r = f(a),\,$ i.e. $\,f(x)\equiv f(a)\pmod{\!x\!-\!a}\ \ $ [Remainder Theorem]. $\ $
Said equivalently $\ x\!-\!a\mid f(x)\!-\!f(a)\ \ $ [Factor Theorem]
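As a quick illustration with the polynomial from the question, take $f(x) = x^3-3x^2+2x-1$ and $a=2$. Dividing by $x-2$ gives
$$x^3-3x^2+2x-1 = (x^2-x)(x-2) - 1,$$
so the remainder is $-1 = f(2)$, exactly as the Remainder Theorem predicts.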
For further insight, and a nontrivial application, see the Heaviside cover-up method for evaluating partial fraction decompositions. This does explicitly involve fractions, and, as such, the circumvention of division by zero is more explicit.
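As a small illustration of the cover-up method (my own example, not taken from the linked discussion), to decompose
$$\frac{1}{(x-1)(x-2)} = \frac{A}{x-1} + \frac{B}{x-2},$$
cover up the factor $x-1$ and evaluate the rest at $x=1$ to get $A = \frac{1}{1-2} = -1$; cover up $x-2$ and evaluate at $x=2$ to get $B = \frac{1}{2-1} = 1$. Although the recipe appears to evaluate a fraction at a pole, the underlying computation is the same remainder evaluation as above, so no genuine division by zero occurs.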
More explicitly, we can use division to calculate polynomial derivatives purely algebraically:
$$\begin{eqnarray} f'(a) &=\,& \dfrac{f(x)-f(a)}{x-a}\Bigg|_{\large\, x\,=\,a}\\ \\ {\rm i.e.}\ \ \ f'(a) &=\,& q(a)\ \ {\rm where}\ \ f(x)-f(a) = q(x)(x-a)\end{eqnarray}$$
There is no division by zero because the prior linear equation defines a unique polynomial $\,q(x),\,$ and polynomials have no singularities, i.e. they can be evaluated at any point. This leads to purely algebraic proofs of familiar analytic results, e.g. the double-root test for polynomials.
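For example, with $f(x) = x^2$ at a point $a$ we have $f(x) - f(a) = x^2 - a^2 = (x+a)(x-a)$, so $q(x) = x+a$ and $f'(a) = q(a) = 2a$, recovering the familiar derivative with no limit and no division by zero.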
For a more extreme example of circumvention of division by $\,0,\,$ see this discussion of a proof of Sylvester's determinant identity. Even some professors have mistakenly thought that this proof involves division by zero (a strong testament to the gaps in the exposition of the universal properties of polynomials in many algebra courses). But here, as in the prior examples, there is, in fact, no division by zero, because, before evaluation, one performs valid polynomial operations that eliminate (apparent) singularities. Thus, to gain the full universal algebraic power of formal polynomials, one has to (temporarily) forget their analytic view (as functions). This is easier said than done, since the analytic (functional) bias is so strongly ingrained in our intuition.