Find a basis $A$ of the space $\mathbb R^{4}$ and a basis $B$ of the space $\mathbb R^{3}$

You are guessing. There is no need to.

If the required bases are, respectively, $\{v_1,v_2,v_3,v_4\}$ and $\{w_1,w_2,w_3\}$, then the condition on the matrix translates into $$ \varphi(v_1)=w_1,\quad \varphi(v_2)=w_2,\quad \varphi(v_3)=0,\quad \varphi(v_4)=0 $$ The idea is then to complete a basis of the kernel, which you computed correctly, to a basis of $\mathbb{R}^4$.

Since $\ker\varphi$ is the set of vectors satisfying \begin{cases} x_{1}+x_{3}+x_{4}=0 \\ x_{1}+x_{2}+2x_{3}+3x_{4}=0\\ x_{1}-x_{2}-x_{4}=0 \end{cases} you can do elimination $$ \begin{bmatrix} 1 & 0 & 1 & 1 \\ 1 & 1 & 2 & 3 \\ 1 & -1 & 0 & -1 \end{bmatrix}\to \begin{bmatrix} 1 & 0 & 1 & 1 \\ 0 & 1 & 1 & 2 \\ 0 & -1 & -1 & -2 \end{bmatrix}\to \begin{bmatrix} 1 & 0 & 1 & 1 \\ 0 & 1 & 1 & 2 \\ 0 & 0 & 0 & 0 \end{bmatrix} $$ so a basis of $\ker\varphi$ is given by $v_3=(-1,-1,1,0)$ and $v_4=(-1,-2,0,1)$. Now complete it to a basis of $\mathbb{R}^4$ by finding the null space of $$ \begin{bmatrix} -1 & -1 & 1 & 0 \\ -1 & -2 & 0 & 1 \end{bmatrix} $$ Elimination yields $$ \to \begin{bmatrix} 1 & 1 & -1 & 0 \\ 0 & -1 & -1 & 1 \end{bmatrix} \to \begin{bmatrix} 1 & 1 & -1 & 0 \\ 0 & 1 & 1 & -1 \end{bmatrix} \to \begin{bmatrix} 1 & 0 & -2 & 1 \\ 0 & 1 & 1 & -1 \end{bmatrix} $$ which provides $v_1=(2,-1,1,0)$ and $v_2=(-1,1,0,1)$. Note that, by construction, $\{\varphi(v_1),\varphi(v_2)\}$ is linearly independent. Now take $$ w_1=\varphi(v_1)=(3,3,3)\qquad w_2=\varphi(v_2)=(0,3,-3) $$ and complete this to a basis of $\mathbb{R}^3$ by finding the null space of $$ \begin{bmatrix} 3 & 3 & 3 \\ 0 & 3 & -3 \end{bmatrix} $$ Elimination: $$ \to \begin{bmatrix} 1 & 1 & 1 \\ 0 & 1 & -1 \end{bmatrix} \to \begin{bmatrix} 1 & 0 & 2 \\ 0 & 1 & -1 \end{bmatrix} $$ so the vector you need is $w_3=(-2,1,1)$.
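If you want to double-check the computation mechanically, here is a small Python sanity check (a sketch, assuming, as above, that the standard matrix of $\varphi$ is the coefficient matrix of the kernel system):

```python
# Standard matrix of phi, read off from the kernel system above.
M = [[1, 0, 1, 1],
     [1, 1, 2, 3],
     [1, -1, 0, -1]]

def phi(v):
    """Apply phi to a vector of R^4 via its standard matrix M."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# Kernel basis from the first elimination.
v3 = [-1, -1, 1, 0]
v4 = [-1, -2, 0, 1]
assert phi(v3) == [0, 0, 0] and phi(v4) == [0, 0, 0]

# Completion of the kernel basis from the second elimination.
v1 = [2, -1, 1, 0]
v2 = [-1, 1, 0, 1]
print(phi(v1), phi(v2))  # the images w1 and w2
```

Running it confirms the kernel vectors map to zero and prints the images $w_1$ and $w_2$ used above.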


$\beta_{3}$ I can choose anyway because I do not have any dependencies on it. It can be linearly independent from the other basis vectors, so it can be $\beta_{3}=(0,0,1)$

Perhaps you meant it that way, but you don't want "$\beta_{3}$ (...) can be linearly independent (...)" but rather *has to be*, because otherwise $B$ wouldn't be a basis. So you are right that you can freely choose $\beta_{3}$, as long as it's linearly independent from the first two elements of $B$.

So $\ker \varphi=\operatorname{lin}\left\{(-1,-1,1,0),(-1,-2,0,1)\right\}$. These vectors are in the basis $A$, but I need two more vectors.

$\dim(\ker\varphi)=2$, so $\dim(\operatorname{im}\varphi)=2$, and it can be: $(1,1,1),(0,1,-1)$ (I take $2$ linearly independent vectors from the columns of the matrix $M(\varphi)^{st}_{st}$)
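That column argument can be verified mechanically; below is a plain-Python sketch (using `Fraction` only to keep the span test exact) checking that the first two columns of the standard matrix are independent and that every other column lies in their span:

```python
from fractions import Fraction

# Standard matrix of phi (coefficient matrix of the kernel system).
M = [[1, 0, 1, 1],
     [1, 1, 2, 3],
     [1, -1, 0, -1]]

cols = list(zip(*M))       # columns of M
c1, c2 = cols[0], cols[1]  # (1, 1, 1) and (0, 1, -1)

# c1, c2 are independent: a 2x2 minor is nonzero.
assert c1[0] * c2[1] - c1[1] * c2[0] != 0

# Every remaining column is a*c1 + b*c2 for some scalars a, b.
for c in cols[2:]:
    a = Fraction(c[0], c1[0])   # first coordinate of c2 is 0, so a is forced
    b = c[1] - a * c1[1]        # second coordinate then forces b
    assert all(a * x + b * y == z for x, y, z in zip(c1, c2, c))

print("rank 2 confirmed")
```

So the image is indeed $2$-dimensional, spanned by those two columns.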

I would do this differently, but perhaps that's because I'm not sure I follow your reasoning.

Based on the kernel, you already have the last two (blue) elements in $A=\left\{\alpha_1,\alpha_2,\color{blue}{\alpha_3},\color{blue}{\alpha_4}\right\}$, so now you can simply extend to a full basis of $\mathbb{R}^4$ by picking any $\alpha_1$ and $\alpha_2$, as long as all four are linearly independent. The first two standard basis vectors work, so pick e.g. $\alpha_1=(1,0,0,0)$ and $\alpha_2=(0,1,0,0)$. The required form of the matrix is then automatically satisfied if you pick $\beta_1 = \varphi\left(\alpha_1\right)$ and $\beta_2 = \varphi\left(\alpha_2\right)$ and then add $\beta_3$ as described above.
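This construction can also be sanity-checked in a few lines of Python (a sketch: $M$ is the coefficient matrix of the kernel system, the last two rows of `A` are the kernel basis, and $\beta_3=(0,0,1)$ as you suggested):

```python
# Standard matrix of phi.
M = [[1, 0, 1, 1],
     [1, 1, 2, 3],
     [1, -1, 0, -1]]

def phi(v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def det3(r1, r2, r3):
    """3x3 determinant, used to confirm B is a basis of R^3."""
    (a, b, c), (d, e, f), (g, h, i) = r1, r2, r3
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

# alpha_1 = e1, alpha_2 = e2, then the two kernel vectors.
A = [[1, 0, 0, 0], [0, 1, 0, 0], [-1, -1, 1, 0], [-1, -2, 0, 1]]

beta1, beta2 = phi(A[0]), phi(A[1])  # first two columns of M
beta3 = [0, 0, 1]                    # free choice completing B

assert det3(beta1, beta2, beta3) != 0        # B is a basis
assert phi(A[2]) == [0, 0, 0]                # kernel vectors give
assert phi(A[3]) == [0, 0, 0]                # the two zero columns
```

Since $\varphi(\alpha_1)=\beta_1$, $\varphi(\alpha_2)=\beta_2$ hold by construction and the kernel vectors map to zero, the matrix of $\varphi$ in these bases has the required form.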

I have the feeling your approach is the other way around: finding suitable $\alpha_1$ and $\alpha_2$ for $A$ to match the earlier-picked $\beta_1$ and $\beta_2$ in $B$.


Thumbs up for the well documented question, showing your own work.