Under what conditions is a linear automorphism an isometry of some inner product?
Hint: If $T$ is an isometry of the inner product $(x, y) \mapsto \langle x, y \rangle$, then for any $P \in GL(V)$, $P^{-1} T P$ is an isometry of the inner product $(x, y) \mapsto \langle P x, P y \rangle$ (it is not hard to verify that this indeed defines an inner product, but you may wish to prove it anyway). Thus, the property that a given transformation admits such an inner product is invariant under similarity; that is, it is an invariant of the conjugacy class of $T$ in $GL(V)$.
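Indeed, writing $\langle x, y \rangle_P := \langle Px, Py \rangle$, the isometry claim is a one-line computation:
$$\langle (P^{-1}TP)x,\, (P^{-1}TP)y \rangle_P = \langle TPx,\, TPy \rangle = \langle Px,\, Py \rangle = \langle x, y \rangle_P,$$
using that $T$ is an isometry of $\langle \cdot, \cdot \rangle$. Symmetry and bilinearity of $\langle \cdot, \cdot \rangle_P$ are inherited from $\langle \cdot, \cdot \rangle$, and positive-definiteness uses the invertibility of $P$.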
On the other hand, we already know canonical representatives of each similarity class in $GL(V)$: these are given by the analogue of the Jordan normal form for real matrices. These matrices are relatively simple, and so one can check directly, for a general matrix of this form, whether such an inner product exists.
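For reference, the real Jordan form is block diagonal with blocks of the two shapes
$$\begin{pmatrix} \lambda & 1 & & \\ & \lambda & \ddots & \\ & & \ddots & 1 \\ & & & \lambda \end{pmatrix} \qquad\text{and}\qquad \begin{pmatrix} C & I_2 & & \\ & C & \ddots & \\ & & \ddots & I_2 \\ & & & C \end{pmatrix}, \qquad C = \begin{pmatrix} \alpha & -\beta \\ \beta & \alpha \end{pmatrix},$$
with $\lambda, \alpha, \beta \in \Bbb R$ and $\beta \neq 0$.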
Instead of attacking this immediately, you might like to prove first the following lemmata:
1. One can essentially treat each "Jordan block" separately. More precisely, given a direct sum decomposition $V = \bigoplus_a V_a$ and linear transformations $T_a : V_a \to V_a$, there is an inner product on $V$ preserved by $T := \bigoplus_a T_a$ iff for each $a$ there is an inner product on $V_a$ preserved by $T_a$ (a sketch of both directions is given after the list).
2. $T$ cannot have any nonsimple "Jordan blocks"; that is, $T$ is block diagonal, where each block has the form $$\phantom{(\ast)} \qquad \begin{pmatrix} \lambda \end{pmatrix} \qquad \text{or} \qquad \begin{pmatrix} \alpha & -\beta \\ \beta & \alpha\end{pmatrix} \qquad (\ast).$$
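For the first lemma: given invariant inner products $\langle \cdot, \cdot \rangle_a$ on the summands, the orthogonal direct sum
$$\Big\langle \textstyle\bigoplus_a x_a,\, \bigoplus_a y_a \Big\rangle := \sum_a \langle x_a, y_a \rangle_a$$
is preserved by $T = \bigoplus_a T_a$ summand by summand; conversely, the restriction to $V_a$ of an inner product on $V$ preserved by $T$ is an inner product on $V_a$ preserved by $T_a$, since $V_a$ is $T$-stable and $T|_{V_a} = T_a$.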
The proof of (2) is essentially the one you give for your counterexample $\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}.$
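Explicitly: for $T = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$ one has $T^n e_2 = n e_1 + e_2$, so in any inner product
$$\|T^n e_2\| \geq n\|e_1\| - \|e_2\| \xrightarrow{\ n \to \infty\ } \infty,$$
whereas an isometry would force $\|T^n e_2\| = \|e_2\|$ for all $n$. Together with the observation about eigenvalue moduli, essentially the same unbounded-orbit argument rules out every nonsimple block.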
With these two facts in hand, it's essentially enough to solve the problem for the two transformations in $(\ast)$ above. Since the sole eigenvalue of $\begin{pmatrix} \lambda \end{pmatrix}$ is $\lambda$, by the first observation in the question a necessary and sufficient condition for this block is $\lambda \in \{\pm 1\}$.
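The eigenvalues of the second block in $(\ast)$ are $\alpha \pm i\beta$, so the same observation forces $\alpha^2 + \beta^2 = 1$; conversely, when $\alpha^2 + \beta^2 = 1$ the block is a rotation matrix and preserves the standard inner product. Here is a minimal numerical sanity check of the scaling behaviour, assuming numpy (the particular $(\alpha, \beta)$ values are illustrative):

```python
import numpy as np

# The 2x2 block of (*) for a complex eigenvalue alpha + i*beta.
def block(alpha, beta):
    return np.array([[alpha, -beta], [beta, alpha]])

# B^T B = (alpha^2 + beta^2) I, so the block scales every standard length
# by sqrt(alpha^2 + beta^2): it is an isometry of the standard inner
# product iff alpha^2 + beta^2 = 1.
rng = np.random.default_rng(1)
for alpha, beta in [(0.6, 0.8), (1.0, 1.0)]:  # on / off the unit circle
    B = block(alpha, beta)
    x = rng.standard_normal(2)
    # The ratio ||Bx|| / ||x|| equals sqrt(alpha^2 + beta^2) in both cases.
    print(np.linalg.norm(B @ x) / np.linalg.norm(x), np.hypot(alpha, beta))
```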
Curious that the simple necessary and sufficient condition has not (it would seem to me) been clearly stated in any of the answers so far:
The linear operator $T$ preserves some inner product on $V$ if and only if $V$ admits a basis for which the matrix of $T$ is orthogonal (in other words, the matrix of $T$ with respect to an arbitrary basis is similar to an orthogonal matrix). This occurs if and only if the complexification of $T$ is diagonalisable and all its (complex) eigenvalues have absolute value$~1$.
For the "if" of the first sentence, it suffices to define the inner product by the formula for the standard inner product of $\Bbb R^n$ in terms of the coordinates with respect to that basis; for the converse (only if) just take any orthonormal basis for the inner product. That the conditions in the second sentence are necessary is because these are well-known properties of orthogonal matrices.
That they are sufficient, in other words that a real matrix$~A$ that is diagonalisable over $\Bbb C$ with all its eigenvalues of absolute value$~1$ is similar (over$~\Bbb R$) to an orthogonal matrix, is also standard, but the argument is slightly more involved.

Since the complex-linear operator$~\phi$ on$~\Bbb C^n$ defined by$~A$ commutes with the (real-linear) operation$~J$ that performs complex conjugation of all coordinates, one easily shows that a basis of eigenvectors for$~\phi$ can be chosen so that, as a set, it is stable under$~J$: eigenvectors for real eigenvalues are chosen to be $J$-fixed (to have real coordinates), and eigenvectors in the basis for non-real eigenvalues are chosen to come in pairs interchanged by$~J$, whose eigenvalues are complex conjugates of each other. This gives a decomposition of$~\Bbb C^n$ into a direct sum of $J$-stable subspaces of (complex) dimension $1$ or$~2$, each of which is the complexification of its real subspace of $J$-fixed vectors.

These subspaces define a decomposition of (the original real vector space) $V$ into $T$-stable subspaces of (real) dimension $1$ or$~2$, and it suffices to show that each of these subspaces has a basis for which the matrix of the restriction of$~T$ to the subspace is orthogonal. In dimension$~1$ this is trivial (the restriction is scalar multiplication by $1$ or by$~{-}1$), while in dimension$~2$ one can take a basis consisting of the common "real part" of the pair of complex eigenvectors and of the common (up to sign) "imaginary part", for which the $2\times2$ matrix will be that of a rotation, hence orthogonal.
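Concretely, in the dimension-$2$ case: writing the eigenvalue as $\lambda = \cos\theta + i\sin\theta$ and a corresponding eigenvector as $v = u + iw$ with $u, w$ real, comparing real and imaginary parts in $Av = \lambda v$ gives
$$A u = \cos\theta\, u - \sin\theta\, w, \qquad A w = \sin\theta\, u + \cos\theta\, w,$$
so on the basis $(u, w)$ the matrix of the restriction of$~T$ is $\begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}$, a rotation matrix.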