Compressing a set of large integers
If the integers are random, unrelated, and genuinely follow a uniform distribution over [0, 2³²), it can probably be demonstrated that you cannot compress the array beyond the trivial representation. Did I miss something in your question?
For arrays of numbers that are not random, I usually use plain deflate. It is a commonly used algorithm because it performs well on general, not totally random, data. The fact that good libraries with an adjustable compression level are available in all major languages is of course another advantage.
I use deflate to compress small arrays (about 300 to 2000 32-bit integers) of physical sensor measurements and get about a 70% size reduction, but that's because successive sensor measurements are rarely very different.
It probably won't be easy to find a notably better algorithm suited to all situations. Most improvements will come from exploiting the specific properties of your number series.
Note also that you would get better compression by compressing many sets together. Of course this may be very inconvenient, depending on your application.
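For reference, here is a minimal sketch of that kind of deflate call, assuming zlib; the array contents and size are made up for illustration.

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <zlib.h>

int main(void) {
    /* fake "sensor" data: slowly varying values, like successive measurements */
    uint32_t samples[1000];
    for (int i = 0; i < 1000; i++)
        samples[i] = 100000 + 3 * i + (i % 7);

    uLong src_len = sizeof(samples);
    uLongf dst_len = compressBound(src_len);
    Bytef *dst = malloc(dst_len);

    /* deflate at maximum compression level */
    if (compress2(dst, &dst_len, (const Bytef *)samples, src_len,
                  Z_BEST_COMPRESSION) == Z_OK)
        printf("%lu -> %lu bytes (%.0f%% saved)\n", src_len, dst_len,
               100.0 * (1.0 - (double)dst_len / src_len));

    free(dst);
    return 0;
}

Delta-coding the samples before deflating usually improves the ratio further, since successive measurements are close to each other.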
Is the subject still open?
I am currently working on it.
(PS: I am a game creator and not a mathematician)
I haven't slept well for weeks because I keep wondering why we don't use an A^B+C variant (or something similar) to compress images and other data.
My utopian goal is to compress a 4,600,000-digit number using as few A^B+C terms as possible, computed on the computer's GPU. Basically I am trying to do this because it would allow storing/streaming a small image in under 100 characters, without loss of quality, at 30 fps over Wi-Fi, without killing the bandwidth.
My realistic goal is to compress a 200-digit number into fewer than 5 characters.
PS: To do that I already created "Base Chinais".
If you want to use it:
- https://github.com/EloiStree/2019_09_19_MathCompressionOfImage/wiki/SouthChinais
- https://gitlab.com/eloistree/2019_09_06_UnicodeBasedId
In Base Chinais, 䶯 = 38727.
It allows transforming 2307^200+32450 into 碸^災+㔩.
If you use it raw to compress a BigInteger, Base Chinais gives about 4-4.5x compression:
1413546486463454579816416416416462324833676542
4钉澻둲觋㷬乮䄠櫡䒤갱
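To make the 4-4.5x figure concrete: a symbol that can take roughly 38727 values carries about log10(38727) ≈ 4.6 decimal digits. Below is a minimal GMP sketch of that idea, my own interpretation rather than the actual Base Chinais code; it only extracts the radix-38727 digits, and the digit-to-character table would come from the linked repositories.

#include <stdio.h>
#include <string.h>
#include <gmp.h>

int main(void) {
    const char *decimal = "1413546486463454579816416416416462324833676542";
    mpz_t n;
    mpz_init_set_str(n, decimal, 10);

    /* repeatedly divide by the radix; each remainder is one output symbol */
    unsigned long digit[64];
    int count = 0;
    while (mpz_sgn(n) > 0)
        digit[count++] = mpz_fdiv_q_ui(n, n, 38727);

    /* digits come out least significant first; print most significant first */
    for (int i = count - 1; i >= 0; i--)
        printf("%lu ", digit[i]);
    printf("\n%d symbols for %zu decimal digits\n", count, strlen(decimal));

    mpz_clear(n);
    return 0;
}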
So now I need to compress a number of fewer than 200 digits into the form 9999^9999+99999999.
If you have any ideas or alternatives to A^B+C, feel free to let me know.
I am spending a lot of time experimenting with this in Unity3D.
I will post what I find on the subject here:
https://github.com/EloiStree/2019_09_19_MathCompressionOfImage/wiki
Hope it helps the next person who ends up here.
Find me on Discord if you want to talk about it.
https://eloistree.page.link/discord
You can get an idea of the best you can do by counting. (I wish stackoverflow allowed TeX equations like math.stackexchange. Anyway ...)
ceiling(log(Combination(2^32,1000)) / (8 * log(2))) = 2934
So if, as you say, the choices are uniformly distributed, the best compression you could hope for on average for that particular case is 2934 bytes. The best ratio is 73.35% of the unencoded representation of 4000 bytes.
Combination(2^32,1000) is simply the total number of possible inputs to the compression algorithm. If those are uniformly distributed, then the optimal coding is one giant integer that identifies each possible input by an index. Each giant integer value uniquely identifies one of the inputs. Imagine looking up the input by index in a giant table. ceiling(log(Combination(2^32,1000)) / log(2)) is how many bits you need for that index integer.
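If you want to reproduce that number, a small GMP program (my addition; any bignum library works) can compute the bound directly:

#include <stdio.h>
#include <gmp.h>

int main(void) {
    mpz_t n, c;
    mpz_init(n);
    mpz_init(c);

    mpz_ui_pow_ui(n, 2, 32);        /* n = 2^32 */
    mpz_bin_ui(c, n, 1000);         /* c = Combination(2^32, 1000) */

    size_t bits = mpz_sizeinbase(c, 2);          /* bit length of c */
    printf("index needs %zu bits = %zu bytes\n", /* expect 2934 bytes */
           bits, (bits + 7) / 8);

    mpz_clear(n);
    mpz_clear(c);
    return 0;
}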
Update:
I found a way to get close to the theoretical best using off-the-shelf compression tools. I sort, apply delta coding, and subtract one from that (since the delta between successive distinct elements is at least one). Then the trick is that I write out all the high bytes, then the next most significant bytes, etc. The high bytes of the deltas minus one tend to be zero, so that groups a lot of zeros together, which the standard compression utilities love. Also the next set of bytes tend to be biased to low values.
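Here is one way that reordering could look, a minimal sketch of my reading of the steps above (sort, delta minus one, then emit the most significant byte plane first); the output buffer would then be fed to gzip or xz:

#include <stdint.h>
#include <stdlib.h>

static int cmp32(const void *a, const void *b) {
    uint32_t x = *(const uint32_t *)a, y = *(const uint32_t *)b;
    return (x > y) - (x < y);
}

/* Rewrite set[0..num-1] as sorted deltas-minus-one and fill out[] (4*num
   bytes) with the bytes grouped by significance: all high bytes first, then
   the next plane, and so on.  Note that set[] is modified in place. */
static void delta_byte_planes(uint32_t *set, size_t num, unsigned char *out) {
    if (num == 0)
        return;
    qsort(set, num, sizeof(uint32_t), cmp32);

    /* first value kept as-is; later values become gap to predecessor minus 1 */
    for (size_t i = num - 1; i > 0; i--)
        set[i] -= set[i - 1] + 1;

    /* byte planes: most significant bytes tend to be zero, so they group */
    size_t k = 0;
    for (int shift = 24; shift >= 0; shift -= 8)
        for (size_t i = 0; i < num; i++)
            out[k++] = (unsigned char)(set[i] >> shift);
}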
For the example (1000 uniform and distinct samples from 0..2^32-1), I get an average of 3110 bytes when running that through gzip -9, and 3098 bytes through xz -9 (xz uses the same compression, LZMA, as 7zip). Those are pretty close to the theoretical best average of 2934. Also gzip has an overhead of 18 bytes, and xz has an overhead of 24 bytes, both for headers and trailers. So a fairer comparison with the theoretical best would be 3092 for gzip -9 and 3074 for xz -9. That's around 5% larger than the theoretical best.
Update 2:
I implemented direct encoding of the combinations, and achieved an average of 2974 bytes, which is only a little over 1% more than the theoretical best. I used the GNU multiple precision arithmetic library (GMP) to encode an index for each combination in a giant integer. The actual code for the encoding and decoding is shown below. I added comments for the mpz_* functions where it may not be obvious from the name what arithmetic operations they're doing.
#include <assert.h>
#include <gmp.h>

#define local static    /* file-local helper functions */

/* Recursively code the members in set[] between low and high (low and high
   themselves have already been coded).  First code the middle member 'mid'.
   Then recursively code the members between low and mid, and then between mid
   and high. */
local void combination_encode_between(mpz_t pack, mpz_t base,
                                      const unsigned long *set,
                                      int low, int high)
{
    int mid;

    /* compute the middle position -- if there is nothing between low and high,
       then return immediately (also in that case, verify that set[] is sorted
       in ascending order) */
    mid = (low + high) >> 1;
    if (mid == low) {
        assert(set[low] < set[high]);
        return;
    }

    /* code set[mid] into pack, and update base with the number of possible
       set[mid] values between set[low] and set[high] for the next coded
       member */
    /* pack += base * (set[mid] - set[low] - 1) */
    mpz_addmul_ui(pack, base, set[mid] - set[low] - 1);
    /* base *= set[high] - set[low] - 1 */
    mpz_mul_ui(base, base, set[high] - set[low] - 1);

    /* code the rest between low and high */
    combination_encode_between(pack, base, set, low, mid);
    combination_encode_between(pack, base, set, mid, high);
}

/* Encode the set of integers set[0..num-1], where each element is a unique
   integer in the range 0..max.  No value appears more than once in set[]
   (hence the name "set").  The elements of set[] must be sorted in ascending
   order. */
local void combination_encode(mpz_t pack, const unsigned long *set, int num,
                              unsigned long max)
{
    mpz_t base;

    /* handle degenerate cases and verify last member <= max -- code set[0]
       into pack as simply itself and set base to the number of possible set[0]
       values for coding the next member */
    if (num < 1) {
        /* pack = 0 */
        mpz_set_ui(pack, 0);
        return;
    }
    /* pack = set[0] */
    mpz_set_ui(pack, set[0]);
    if (num < 2) {
        assert(set[0] <= max);
        return;
    }
    assert(set[num - 1] <= max);
    /* base = max - num + 2 */
    mpz_init_set_ui(base, max - num + 2);

    /* code the last member of the set and update base with the number of
       possible last member values */
    /* pack += base * (set[num - 1] - set[0] - 1) */
    mpz_addmul_ui(pack, base, set[num - 1] - set[0] - 1);
    /* base *= max - set[0] */
    mpz_mul_ui(base, base, max - set[0]);

    /* encode the members between 0 and num - 1 */
    combination_encode_between(pack, base, set, 0, num - 1);
    mpz_clear(base);
}

/* Recursively decode the members in set[] between low and high (low and high
   themselves have already been decoded).  First decode the middle member
   'mid'.  Then recursively decode the members between low and mid, and then
   between mid and high. */
local void combination_decode_between(mpz_t unpack, unsigned long *set,
                                      int low, int high)
{
    int mid;
    unsigned long rem;

    /* compute the middle position -- if there is nothing between low and high,
       then return immediately */
    mid = (low + high) >> 1;
    if (mid == low)
        return;

    /* extract set[mid] as the remainder of dividing unpack by the number of
       possible set[mid] values, update unpack with the quotient */
    /* div = set[high] - set[low] - 1, rem = unpack % div, unpack /= div */
    rem = mpz_fdiv_q_ui(unpack, unpack, set[high] - set[low] - 1);
    set[mid] = set[low] + 1 + rem;

    /* decode the rest between low and high */
    combination_decode_between(unpack, set, low, mid);
    combination_decode_between(unpack, set, mid, high);
}

/* Decode from pack the set of integers encoded by combination_encode(),
   putting the result in set[0..num-1].  max must be the same value used when
   encoding. */
local void combination_decode(const mpz_t pack, unsigned long *set, int num,
                              unsigned long max)
{
    mpz_t unpack;
    unsigned long rem;

    /* handle degenerate cases, returning the value of pack as the only element
       for num == 1 */
    if (num < 1)
        return;
    if (num < 2) {
        /* set[0] = (unsigned long)pack */
        set[0] = mpz_get_ui(pack);
        return;
    }

    /* extract set[0] as the remainder after dividing pack by the number of
       possible set[0] values, set unpack to the quotient */
    mpz_init(unpack);
    /* div = max - num + 2, set[0] = pack % div, unpack = pack / div */
    set[0] = mpz_fdiv_q_ui(unpack, pack, max - num + 2);

    /* extract the last member as the remainder after dividing by the number
       of possible values, taking into account the first member -- update
       unpack with the quotient */
    /* rem = unpack % (max - set[0]), unpack /= max - set[0] */
    rem = mpz_fdiv_q_ui(unpack, unpack, max - set[0]);
    set[num - 1] = set[0] + 1 + rem;

    /* decode the members between 0 and num - 1 */
    combination_decode_between(unpack, set, 0, num - 1);
    mpz_clear(unpack);
}
There are mpz_* functions for writing the number to a file and reading it back, or for exporting the number in a specified format in memory and importing it back.
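For completeness, here is a hypothetical usage sketch (mine, not from the answer above) that assumes the functions shown earlier are in the same file: it encodes a small sorted set, serializes the packed integer with mpz_export, reads it back with mpz_import, and decodes.

#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <gmp.h>

int main(void) {
    unsigned long set[] = {3, 17, 40, 41, 500};
    unsigned long out[5];
    int num = 5;
    unsigned long max = 1023;
    mpz_t pack, unpacked;

    mpz_init(pack);
    combination_encode(pack, set, num, max);

    /* serialize: least significant word first, 1-byte words, native endian */
    size_t count;
    unsigned char *buf = mpz_export(NULL, &count, -1, 1, 0, 0, pack);
    printf("packed %d values into %zu bytes\n", num, count);

    /* ... write buf[0..count-1] to a file or socket, then read it back ... */

    mpz_init(unpacked);
    mpz_import(unpacked, count, -1, 1, 0, 0, buf);
    combination_decode(unpacked, out, num, max);
    assert(memcmp(set, out, sizeof(set)) == 0);

    mpz_clear(pack);
    mpz_clear(unpacked);
    free(buf);   /* GMP used its default (malloc-based) allocator here */
    return 0;
}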