Bayesian Inference in Measure Theory
A classic book on Bayesian statistics that makes (modest) use of measure theory is Optimal Statistical Decisions by Morris DeGroot.
Of course, Bayes' rule holds even in the framework of measure-theoretic probability. But for more general treatments of probabilistic conditioning, there is the very abstract framework for conditional expectations due to Kolmogorov, based on the Radon-Nikodym theorem. Since the probability of an event equals the expectation of its indicator function, one can use this framework to treat conditional probabilities as well. More concrete, but less general, is working with regular conditional probabilities. Something these approaches will not help you with is conditioning on probability-zero events: such events do not matter in classical probability theory, and they do not matter in Bayesian statistics.
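To spell out the Kolmogorov construction mentioned above, here is a standard sketch of how the Radon-Nikodym theorem delivers conditional expectations, and hence conditional probabilities via indicator functions (notation is mine, not from the references above):

```latex
% Let (\Omega, \mathcal{F}, P) be a probability space and
% \mathcal{G} \subseteq \mathcal{F} a sub-\sigma-algebra.
% For integrable X, the conditional expectation E[X \mid \mathcal{G}]
% is any \mathcal{G}-measurable random variable satisfying
\[
\int_{B} E[X \mid \mathcal{G}] \,\mathrm{d}P
  \;=\; \int_{B} X \,\mathrm{d}P
  \qquad \text{for every } B \in \mathcal{G}.
\]
% Existence: \mu(B) = \int_{B} X \,\mathrm{d}P defines a signed
% measure on \mathcal{G} that is absolutely continuous with respect
% to the restriction of P to \mathcal{G}, so the Radon-Nikodym
% derivative E[X \mid \mathcal{G}] = \mathrm{d}\mu / \mathrm{d}P
% exists (and is unique up to P-null sets).
% Since P(A) = E[\mathbf{1}_A], conditional probability is the
% special case
\[
P(A \mid \mathcal{G}) \;=\; E[\mathbf{1}_A \mid \mathcal{G}],
  \qquad A \in \mathcal{F}.
\]
```

Note that this defines conditional probabilities only up to null sets, which is exactly why the framework has nothing to say about conditioning on a single probability-zero event.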
Probability-zero events do matter a lot in game theory, where they show up as off-equilibrium beliefs. But the theory of refinements for Bayesian games with infinite type spaces is not yet satisfactory. For plain Bayesian equilibrium, the ability to form expectations ex ante is enough. The classical treatment of the issue can be found in the 1985 paper Distributional strategies for games with incomplete information by Milgrom and Weber (in Mathematics of Operations Research). The paper makes great use of the theory of weak convergence, which is quite important in mathematical game theory.