Expected value of $\log(\det(AA^T))$
I don't see how this expectation can be anything other than $-\infty$. The probability that $A$ is singular does not vanish for any finite $n$. A very lazy lower bound comes from the event that row $1 = \pm\,$row $2$, which forces $\det(A)=0$:
$$P(\det(A)=0) \geq \frac{1}{2^{n-1}}$$
Since $\log|\det(A)| = -\infty$ on an event of probability at least $2^{1-n}$,
$$E(\log(|\det(A)|)) \leq (-\infty)\cdot 2^{1-n} = -\infty$$
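As a quick sanity check, here is a minimal Monte Carlo sketch in Python/numpy (the helper name `singular_fraction` is just illustrative) that estimates $P(\det(A)=0)$ for random $\pm 1$ matrices and compares it with the lazy bound $2^{1-n}$:

```python
import numpy as np

rng = np.random.default_rng(0)

def singular_fraction(n, trials=100_000):
    """Estimate P(det(A) = 0) for a random n x n {-1,1} matrix."""
    hits = 0
    for _ in range(trials):
        A = rng.choice([-1, 1], size=(n, n))
        # determinants of small integer matrices are exact after rounding
        if round(np.linalg.det(A)) == 0:
            hits += 1
    return hits / trials

for n in range(2, 6):
    print(n, singular_fraction(n), 2.0 ** (1 - n))
```

For $n=2$ the bound is tight: a $2\times 2$ $\pm 1$ matrix is singular exactly when its rows agree up to sign, so $P(\det(A)=0)=1/2$ there.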
As of 2010, the exact asymptotics of $E(|\det(A)|)$ were not known. It is conjectured that for $\{0,1\}$ matrices, $E(|\det(A)|)\sim Ce^{-n/2}n^{n/2}$ with $C\approx \sqrt{2}\,e^{1/4}$.
It is not difficult to transfer the result to $\{-1,1\}$ matrices. The correspondence is $\det(B_{n+1})=\pm 2^n\det(A_n)$, where $A_n\in M_n(0,1)$ and $B_{n+1}\in M_{n+1}(-1,1)$: border the matrix with a row and column of ones, flip signs in the interior, and subtract the first row from the others.
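For concreteness, here is a short Python/numpy sketch of one standard version of this correspondence; under this construction $\det(B_{n+1}) = (-2)^n \det(A_n)$, so $|\det(B_{n+1})| = 2^n |\det(A_n)|$, and the script checks this on a random example:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.integers(0, 2, size=(n, n))   # random {0,1} matrix

# Border with ones and map 0 -> 1, 1 -> -1 in the interior block.
B = np.ones((n + 1, n + 1))
B[1:, 1:] = 1 - 2 * A                 # entries of B are in {-1, 1}

# Subtracting row 0 from the other rows leaves [[1, 1^T], [0, -2A]] in
# block form, so det(B) = (-2)^n det(A).
print(np.linalg.det(B), (-2) ** n * np.linalg.det(A))
```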
A good specialist in this area is W. Orrick.
For finite $n$, the expected value of the log determinant is $-\infty$, as noted by D.A.N., because the matrix is singular with positive probability. Instead, you can talk about things like the second moment of the determinant, the limiting distribution of the log determinant, and tail bounds saying you're unlikely to be far from its typical value.
Second Moment: It turns out (as was first observed by Turán in the 1940s) that $E(\det(A)^2)$ is much easier to analyze than $E(|\det A|)$. This is because \begin{eqnarray*} E(\det(A)^2) &=& E\left(\sum_{\sigma} \sum_\tau \operatorname{sgn}(\sigma)\operatorname{sgn}(\tau) \prod_{i=1}^n a_{i,\sigma(i)} a_{i, \tau(i)}\right)\\ &=& \sum_{\sigma} \sum_{\tau} \operatorname{sgn}(\sigma)\operatorname{sgn}(\tau)\, E\left( \prod_{i=1}^n a_{i, \sigma(i)} a_{i, \tau(i)}\right) \end{eqnarray*} If $\sigma \neq \tau$, then somewhere in that product there's a variable that appears exactly once. It has mean $0$ and is independent of the other factors, so the entire product has expectation $0$. If on the other hand $\sigma=\tau$, then the sign factor is $1$ and every factor in the product equals $a_{i,\sigma(i)}^2=1$. There are $n!$ choices for $\sigma$, so we have $$E (\det (A)^2 ) = n!$$
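A quick Python/numpy simulation confirms the identity for small $n$ (this is Monte Carlo, so expect an error of a percent or two):

```python
import math
import numpy as np

rng = np.random.default_rng(0)
n, trials = 6, 200_000

# Sample determinants of random n x n {-1,1} matrices.
dets = np.array([np.linalg.det(rng.choice([-1.0, 1.0], size=(n, n)))
                 for _ in range(trials)])

print("sample mean of det^2:", np.mean(dets ** 2))  # should be near n!
print("n! =", math.factorial(n))                    # 720 for n = 6
```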
Limiting Distribution: The asymptotic distribution of the log determinant is known to be Gaussian, in the sense that for any fixed $t$ we have $$P\left( \frac{ \log \det (A A^T ) - \log ( (n-1)! ) }{\sqrt{ 2\log n}} < t \right) \rightarrow \Phi(t)$$
This was originally published by Girko in 1997, but Girko's proof is opaque and seems to skirt some technical details along the way. Later results of Nguyen and Vu, and of Bao, Pan, and Zhou, fill in the gaps and give a more transparent proof.
One curious thing here is that the Gaussian distribution is centered at $\log((n-1)!)$ rather than $\log(n!)$. It follows from the central limit theorem that $\det(AA^T)$ is with high probability $(n-1)!\, e^{O(\sqrt{ \log n} )}$. But as we saw above, $E(\det(AA^T))=E(\det(A)^2)=n!$, which lies outside this interval! The main contribution to the expectation comes from the far upper tail of the determinant's distribution.
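A simulation makes the centering visible. Here is a minimal Python/numpy sketch (using `slogdet`, since the determinants are astronomically large at this size) comparing the median of $\log \det(AA^T)$ with $\log((n-1)!)$ and $\log(n!)$:

```python
import math
import numpy as np

rng = np.random.default_rng(0)
n, trials = 50, 20_000

# log det(A A^T) = 2 log|det(A)|; slogdet avoids overflow at this size.
logdets = np.array([
    2 * np.linalg.slogdet(rng.choice([-1.0, 1.0], size=(n, n)))[1]
    for _ in range(trials)
])

# lgamma(n) = log((n-1)!) and lgamma(n+1) = log(n!); the median should
# sit near the former, roughly log n below the latter.
print("median of log det(AA^T):", np.median(logdets))
print("log (n-1)! =", math.lgamma(n))
print("log n!     =", math.lgamma(n + 1))
```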
Tail Bounds: The Nguyen and Vu paper gives a bound on the rate of convergence: the difference between the two sides of the above equation is at most $\log^{-1/3+o(1)} n$. However, this doesn't give a very strong bound on the probability that the determinant is very far from its typical value. In this range one slightly stronger bound is due to Vu and myself, who showed that for any $B>0$ there is a $C>0$ such that $$P\left(|\log \det(AA^T) - \log n!| > C n^{2/3} \log n\right) \leq n^{-B}$$ This bound is probably very far from optimal: the limiting distribution above has a scaling window proportional to $\sqrt{\log n}$, but here we're looking at deviations on the order of $n^{2/3}$. I actually suspect that either it should be possible to extract a stronger large deviation result from the proofs of the above central limit theorems, or that someone has already proven such a result, but I don't know of one offhand. Döring and Eichelsbacher give a much stronger bound on the tail in the case where the entries are iid Gaussian instead of $\pm 1$ (see the last section of their paper).