Do coordinate components transform in the same or opposite way as their bases?
I think using charts helps to keep the ideas clear, because then it's just a change-of-basis calculation. More precisely, define $\varphi:\mathbb R^2\to \mathbb R^2$ by $\varphi(r,\theta)=(r\cos\theta,r\sin\theta)$ and $\psi:\mathbb R^2\to \mathbb R^2$ by $\psi=\mathrm{id}$, the identity on $\mathbb R^2.$
For any $p\in \mathbb R^2$, the tangent space $T_p\mathbb R^2$ has basis $\{\frac{\partial}{\partial r},\frac{\partial}{\partial \theta}\}$ in the coordinates given by $\varphi$ and $\{\frac{\partial}{\partial x},\frac{\partial}{\partial y}\}$ in the coordinates given by $\psi$, where $x$ and $y$ denote the coordinate projections onto the first and second coordinates, respectively.
To see how the components of vectors transform, write
$\frac{\partial}{\partial r}=a\frac{\partial}{\partial x}+b\frac{\partial}{\partial y}.$ Applying both sides to the coordinate functions $x$ and $y$ gives $a=\frac{\partial x}{\partial r}=\cos\theta$ and $b=\frac{\partial y}{\partial r}=\sin \theta.$
Similarly, $\frac{\partial}{\partial \theta}=c\frac{\partial}{\partial x}+d\frac{\partial}{\partial y}$ with $c=\frac{\partial x}{\partial \theta}=-r\sin\theta$ and $d=\frac{\partial y}{\partial \theta}=r\cos\theta.$
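Equivalently (this is just a chain-rule restatement of the same computation), the coefficients are the entries of the Jacobian of $\varphi$: in general
$$\frac{\partial}{\partial r}=\frac{\partial x}{\partial r}\frac{\partial}{\partial x}+\frac{\partial y}{\partial r}\frac{\partial}{\partial y},\qquad \frac{\partial}{\partial \theta}=\frac{\partial x}{\partial \theta}\frac{\partial}{\partial x}+\frac{\partial y}{\partial \theta}\frac{\partial}{\partial y},$$
and with $x=r\cos\theta$, $y=r\sin\theta$ these partial derivatives are exactly the $a,b,c,d$ above.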
Therefore, if the vector has components $(u,v)$ in the $\varphi$ system, then its components in the $\psi$ system are
$$\begin{pmatrix} \cos\theta &-r\sin\theta \\ \sin\theta&r\cos\theta \end{pmatrix}\begin{pmatrix} u\\v \end{pmatrix}$$
and the matrix of the transformation is displayed explicitly. Its columns are the expansions of $\frac{\partial}{\partial r}$ and $\frac{\partial}{\partial \theta}$ found above, so the same matrix that expresses the $\varphi$-basis vectors in terms of the $\psi$-basis carries components from the $\varphi$ system to the $\psi$ system; read in a fixed direction, say from $\psi$ to $\varphi$, the basis transforms with this matrix while the components transform with its inverse. That is the sense in which components transform oppositely to their bases.
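As a quick sanity check (a concrete example, not part of the argument above): at the point with $(r,\theta)=(1,\pi/2)$, the vector $\frac{\partial}{\partial\theta}$, i.e. $(u,v)=(0,1)$, gets Cartesian components
$$\begin{pmatrix} \cos\tfrac\pi2 & -\sin\tfrac\pi2 \\ \sin\tfrac\pi2 & \cos\tfrac\pi2 \end{pmatrix}\begin{pmatrix} 0\\1 \end{pmatrix}=\begin{pmatrix} -1\\0 \end{pmatrix},$$
so $\frac{\partial}{\partial\theta}=-\frac{\partial}{\partial x}$ there, which is indeed the tangent to the unit circle at $(x,y)=(0,1)$.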
You are discovering the wonders of the contragredient transformation. If you write a vector and its basis representation as $$ v=(e_1,...,e_n)\pmatrix{v^1\\\vdots\\v^n} $$ then you can insert $I=M^{-1}M$ in the middle to see that the transformation matrix of the basis tuple is the inverse of the transformation matrix of the coordinate vector.
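Spelled out (same notation, with $M$ any invertible change matrix; this just makes the insertion explicit):
$$ v=(e_1,...,e_n)M^{-1}\,M\pmatrix{v^1\\\vdots\\v^n} =\underbrace{\left((e_1,...,e_n)M^{-1}\right)}_{\text{new basis tuple}}\;\underbrace{M\pmatrix{v^1\\\vdots\\v^n}}_{\text{new coordinate vector}}, $$
so if the coordinate vector is multiplied by $M$, the basis tuple must be multiplied by $M^{-1}$ to keep $v$ unchanged.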
Now if you also arrange the basis tuple formally as a column vector, the transpose of the row above, then the corresponding transformation matrix is the transpose of the inverse, $M^{-\top}$. For orthogonal matrices this is, by definition, the original matrix $M$.
One has to be careful to distinguish the transformation of vectors by a linear map on the vector space from transformations of the basis tuple, as in $$ \phi(v)=(\phi(e_1),...,\phi(e_n))\pmatrix{v^1\\\vdots\\v^n} =(e_1,...,e_n)M_\phi\pmatrix{v^1\\\vdots\\v^n}, $$ or $(v')^i=(M_\phi)^i_jv^j$, which is also valid for isomorphisms that are interpreted as coordinate changes.
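For instance (an illustration with a concrete choice of $\phi$, not taken from the question): let $\phi$ be the rotation of $\mathbb R^2$ by an angle $\alpha$, so $M_\phi=\pmatrix{\cos\alpha&-\sin\alpha\\\sin\alpha&\cos\alpha}$ in the standard basis. Read actively, $(v')^i=(M_\phi)^i_jv^j$ gives the components of the rotated vector in the fixed standard basis; read passively, the same numbers are the components of the unchanged vector $v$ with respect to the basis rotated by $-\alpha$.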
In that sense, it is ambiguous to write $(A')^i=M^{-1}A^i$, since in your context this conflates a map on column vectors with linear functionals, which have no natural identification. What you probably mean is $(A')^i=(M^{-1})^i_jA^j$, with the summation over $j$ implied.
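Written out for $n=2$, just to make the implied summation explicit:
$$(A')^1=(M^{-1})^1_1A^1+(M^{-1})^1_2A^2,\qquad (A')^2=(M^{-1})^2_1A^1+(M^{-1})^2_2A^2,$$
i.e. each new component is a linear combination of all the old components, with coefficients taken from the corresponding row of $M^{-1}$.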