Why is eigendecomposition $V \Lambda V^{-1}$ not $V^{-1} \Lambda V$
It all depends on how you define your coordinate-transformation matrix $V$; if you replace it by its inverse (which carries the same information), the two possible formulas for the diagonalisation are interchanged. Typically one takes $V$ to be the matrix whose columns contain the coordinates of a chosen basis of eigenvectors, those coordinates being expressed, of course, in terms of the basis in which the matrix $A$ was originally given. It is a sad fact of life that multiplying by that matrix performs the coordinate transformation in the opposite sense: it converts a vector expressed in coordinates with respect to the eigenvector basis into its expression in the original basis. Think about it: if you apply $V$ to a standard basis vector, the result is a column of $V$, and therefore expresses an eigenvector (the one whose coordinates with respect to the eigenvector basis are given by that standard basis vector) in coordinates with respect to the original basis. Consequently $V^{-1}AV = \Lambda$, which rearranges to $A = V\Lambda V^{-1}$, not $V^{-1}\Lambda V$.
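The direction of the coordinate change can be checked numerically. The sketch below uses a small assumed example matrix (nothing from the question itself) and NumPy's `np.linalg.eig`, whose second return value is exactly the matrix $V$ with eigenvectors as columns:

```python
import numpy as np

# Assumed 2x2 example matrix with distinct eigenvalues 2 and 3.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# np.linalg.eig returns eigenvalues and a matrix V whose
# columns are eigenvectors, written in the original basis.
eigvals, V = np.linalg.eig(A)

# Applying V to a standard basis vector picks out a column of V:
# V maps eigenbasis coordinates -> original-basis coordinates.
e0 = np.array([1.0, 0.0])
print(np.allclose(V @ e0, V[:, 0]))  # True

# Hence it is V^{-1} A V (not V A V^{-1}) that is diagonal.
Lam = np.linalg.inv(V) @ A @ V
print(np.allclose(Lam, np.diag(eigvals)))  # True
```

So conjugating $A$ by $V^{-1}$ on the left and $V$ on the right produces $\Lambda$, which is precisely the statement $A = V\Lambda V^{-1}$.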
The formula comes from the equation $AV=V\Lambda$, which is a compact way of presenting the set of eigenvector equations $Av_i=\lambda_i v_i$. Right-multiplying both sides by $V^{-1}$ gives $A = V\Lambda V^{-1}$.
Here, right-multiplication by the diagonal matrix $\Lambda$ scales each column $v_i$ of $V=[v_1 \ \ v_2 \ \dots \ \ v_n]$ by the corresponding eigenvalue $\lambda_i$.
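The packing of the $n$ eigenvector equations into $AV = V\Lambda$ can be verified column by column. This sketch uses an assumed symmetric example matrix, not one from the question:

```python
import numpy as np

# Assumed example; symmetric, so the eigenvectors are well conditioned.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])

eigvals, V = np.linalg.eig(A)
Lam = np.diag(eigvals)

# AV = V Lambda holds as a single matrix equation...
print(np.allclose(A @ V, V @ Lam))  # True

# ...because each column satisfies A v_i = lambda_i v_i.
for i in range(len(eigvals)):
    assert np.allclose(A @ V[:, i], eigvals[i] * V[:, i])

# Right-multiplying by V^{-1} recovers the decomposition.
print(np.allclose(A, V @ Lam @ np.linalg.inv(V)))  # True
```

Note that $\Lambda$ sits on the right of $V$ precisely because right-multiplying by a diagonal matrix scales columns; $\Lambda V$ would instead scale the rows of $V$, which is not what the eigenvector equations say.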