Measurement in 't Hooft's Cellular Automaton Interpretation (CAI)
Short answer: the situation you describe corresponds, in regular quantum mechanics, to an "imprecise" measurement, i.e. not to a projection onto a single state, but to a projection onto a multidimensional subspace of the Hilbert space.
The ontological states correspond, in regular quantum mechanics, to some specific quantum states (which are assumed to be the "true" possible states of the system, as opposed to all other quantum states). So what would the probability of ending up in one of several ontological states correspond to?
In quantum mechanics we describe a classical outcome by some pure state $\langle X \rvert$ and just compute one projection $\langle X | \psi \rangle$.
We only do that if the measurement procedure is "precise", i.e. it's able to distinguish that specific pure state from all other states. But the situation with the "macrostate" you describe is different: it corresponds to a "coarse" measurement procedure, which regular quantum mechanics is also able to handle easily.
So how does it handle calculating the probability of ending up in one of many possible states? Simple: we just project our state $| \psi \rangle$ onto the subspace spanned by all of those states. And the rest is as before: whether the measurement is precise or coarse, the probability is obtained by squaring the length of the projection.
The result is that the total probability is equal to the sum of the probabilities for all of the individual (orthonormal) states that our measurement procedure can't distinguish:
$\sum_{i=1}^N \lvert \langle a_i | \psi \rangle \rvert^2$,
which is of course the exact formula from CAI you wanted to recover.
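For concreteness, here is a minimal numerical sketch (a hypothetical 4-dimensional Hilbert space with an arbitrarily chosen 2-dimensional subspace, using NumPy) verifying that the squared length of the projection equals $\sum_i \lvert \langle a_i | \psi \rangle \rvert^2$:

```python
import numpy as np

# Minimal sketch: hypothetical 4-dimensional Hilbert space, and a coarse
# measurement that cannot distinguish the orthonormal states |a_1>, |a_2>.

rng = np.random.default_rng(0)

# A random normalized state |psi>.
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)

# Orthonormal states spanning the unresolved subspace (here: the first two
# computational basis vectors -- an arbitrary choice for illustration).
a = [np.eye(4)[:, 0], np.eye(4)[:, 1]]

# Projector P = sum_i |a_i><a_i| onto that subspace.
P = sum(np.outer(ai, ai.conj()) for ai in a)

prob_from_projection = np.linalg.norm(P @ psi) ** 2          # || P|psi> ||^2
prob_from_sum = sum(abs(np.vdot(ai, psi)) ** 2 for ai in a)  # sum_i |<a_i|psi>|^2

print(np.isclose(prob_from_projection, prob_from_sum))       # True
```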
The previous answers are basically correct. In the CA interpretation you use the rules for computing things exactly as in “real” quantum mechanics; you may do exactly the same unitary transformations, go from one basis to another, solve the Schroedinger equation there, etc. The only difference is that the CA interpretation assumes the existence of one particular basis that represents the true world. If you stick to that basis, the Schroedinger equation never produces superpositions; it will always bring you back to elements of that basis, if that’s where you started. This ensures that, in this very special basis, everything happens as in classical mechanics. In many simple quantum models one can identify such a basis; often there are many different possible choices. And then the next step is to assume that one of these basis sets represents ‘reality’.
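As a toy illustration (my own sketch, not a model taken from 't Hooft's book), take the evolution law of a 3-state cellular automaton to be the cyclic update $0 \to 1 \to 2 \to 0$. The corresponding one-step evolution operator is a permutation matrix: it is unitary, and it maps ontological basis states onto ontological basis states, so no superpositions are ever generated in that special basis:

```python
import numpy as np

# Toy 3-state cellular automaton: deterministic update law 0 -> 1 -> 2 -> 0.
# Its one-step evolution operator U is a permutation matrix.  It is unitary,
# and acting on an ontological (basis) state it returns another basis state,
# so the evolution never creates superpositions in the ontological basis.

U = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 1, 0]], dtype=complex)       # U|0>=|1>, U|1>=|2>, U|2>=|0>

assert np.allclose(U.conj().T @ U, np.eye(3))  # U is unitary

state = np.array([1, 0, 0], dtype=complex)     # start in ontological state |0>
for _ in range(3):
    state = U @ state
    print(state.real)                          # [0,1,0], then [0,0,1], then [1,0,0]
```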
“Real" quantum mechanics reappears as soon as you make unitary transformations to other basis sets that seem to be more suitable for calculations, although these basis sets do not represent single real states anymore. For doing statistics, this is just fine, even if you don’t see the fine structure of reality anymore; just as in a postcard picture of a beach where you don’t see the sand grains anymore.
One twist added to the story later is that you should accommodate the notion of energy in this picture, where energy just stands for the frequencies of some ultra-fast jittering variables. Their motion is so fast (corresponding to many TeV) that these energetic subsystems are always close to their lowest energy eigenstate. This is how they decouple, in practice, but they may still have effects on the slower, visible variables. These can then develop observable properties that seem not to commute with one another (as can be checked by simple model calculations), and this may well be the way by which non-commuting observable operators enter the picture.
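A simple model calculation of the sort alluded to here (again my own sketch, not the fast/slow-variable construction itself) shows how non-commuting operators can arise from a deterministic, periodic automaton: a "position" observable, diagonal in the ontological basis, fails to commute with an "energy-like" observable built from the eigenphases of the evolution operator.

```python
import numpy as np

# Simple model calculation: for the 3-state cyclic automaton, the "position"
# observable X (which ontological state the system is in) is diagonal in the
# ontological basis, while a toy "energy" H is diagonal in the eigenbasis of
# the evolution operator U (a discrete Fourier basis).  X and H do not commute.

N = 3
U = np.roll(np.eye(N), 1, axis=0).astype(complex)  # cyclic shift |k> -> |k+1 mod N>

X = np.diag(np.arange(N)).astype(complex)          # "position": eigenvalues 0, 1, 2

# Diagonalize U; its eigenvalues are phases exp(i*phi_n).  Build H from the phi_n.
eigvals, V = np.linalg.eig(U)                      # distinct eigenvalues -> V unitary
H = V @ np.diag(np.angle(eigvals)) @ V.conj().T    # Hermitian toy "energy"

commutator = X @ H - H @ X
print(np.allclose(commutator, 0))                  # False: X and H do not commute
```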