Uniform measure on the rationals between 0 and 1
[Comment promoted to answer at request of OP]
No countably infinite set supports a uniform measure (other than the measure where every set has measure zero, and the one where each element has the same non-zero measure and the whole set has infinite measure). Drawing a rational uniformly at random is as impossible as drawing an integer uniformly at random.
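The obstruction is just countable additivity. Enumerate the rationals in [0,1] as q_1, q_2, … and suppose each singleton gets the same mass c; then

```latex
\mu\bigl([0,1]\cap\mathbb{Q}\bigr)
  = \sum_{n=1}^{\infty} \mu(\{q_n\})
  = \sum_{n=1}^{\infty} c
  = \begin{cases} 0, & c = 0,\\ \infty, & c > 0, \end{cases}
```

so no choice of c makes the total probability 1.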
Perhaps you should read the recent work of Professors Aerts and Rédei on the classical interpretation of probability, where, to avoid issues either with the classical interpretation (Bertrand's paradox) or with countable additivity, they ascribe a uniform probability measure over an infinite number of atomic/elementary events.
Miklós Rédei and Z. Gyenis make use of something like the Haar measure; see https://www.google.com.au/search?q=redei+bertrands+paradox&ie=utf-8&oe=utf-8&client=firefox-b&gfe_rd=cr&ei=znUIWcqeH4br8weporHICA
See also http://link.springer.com/chapter/10.1007%2F978-3-319-23015-3_20
They use this to discuss the classical interpretation of probability and the issues that arise with a uniform measure and the principle of indifference with regard to Bertrand's paradox, or otherwise with countable additivity and normalization.
It is an issue to satisfy both of the following at once:

(A) a uniform discrete measure (the principle of indifference), i.e. invariance of the measure;

(B) avoiding a violation of what is called 'labelling irrelevance'.
Labelling irrelevance is the ontic counterpart to (A), and its violation is the classical interpretation's counterpart to frequentism's reference-class problem, i.e. Bertrand's paradox. One wishes to preserve the probabilities of the individual events under relabelling (B) whilst maintaining a uniform measure (A), and vice versa: maintaining (B) without the invariance of the uniform measure being violated. For example, consider the reference class of tables produced with side length n, of which there are countably many. If one relabels them by their area n^2, then either one must ascribe a non-uniform measure over the areas (violating (A)) so that the probabilities of the individual side lengths remain the same, or one keeps (A) and the measure over areas remains uniform, but the probabilities of the individual side lengths change, as it is now a uniform measure over the squares. All this while also trying to avoid issues of normalization or countable additivity. It is an effort to get around Bertrand's paradox on the one hand, and issues pertaining to countable additivity and normalization on the other, as far as I can gather.
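A continuous analogue of the table example can be sketched in a few lines (the specific setup, sides uniform on [0, 1], is my illustration, not the papers'): sampling side lengths uniformly and then relabelling each table by its area yields a non-uniform distribution over areas, so (A) for lengths forces a violation of (A) for areas.

```python
import random

random.seed(0)
N = 100_000

# Principle of indifference applied to side lengths: uniform on [0, 1].
sides = [random.random() for _ in range(N)]

# Relabel each table by its area.
areas = [s * s for s in sides]

# If area were also uniform on [0, 1], P(area < 0.25) would be 0.25;
# but P(side^2 < 0.25) = P(side < 0.5) = 0.5, so the area distribution
# cannot be uniform too.
frac = sum(a < 0.25 for a in areas) / N
print(f"P(area < 0.25) is approximately {frac:.2f}")
```

The same sample cannot be uniform in both parameterizations, which is exactly the relabelling conflict described above.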
And when there are countably infinitely many events all with the same probability (much worse when there are uncountably many), one cannot have a discrete uniform, non-zero probability for each. If there are uncountably many events, not only do they all have probability zero, but further issues arise from Bertrand's paradox: one must then consider finite partitions, or reference classes, and the uniform measure over that set of events is not preserved across them.
The uniform measure (the principle of indifference, or translation invariance) is often preserved at the expense of labelling irrelevance, i.e. the probabilities of the actual events or reference classes remaining the same under relabelling. There is often a fundamental conflict between the two.

Different events may get ascribed different probabilities, so the probabilities of the events are not ontic (not labelling-irrelevant or invariant); that is the price of maintaining the uniform measure, the principle of indifference under translation, whereby all events have the same probability.

One can maintain the uniform measure, but how it gets distributed amongst the events (the probability density of the actual events) then changes: aka Bertrand's paradox.