How to gather byte occurrence statistics in a binary file?

With GNU od:

od -vtu1 -An -w1 my.file | sort -n | uniq -c
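
Here -v disables od's duplicate-line suppression (the "*" lines), -t u1 prints each byte as an unsigned decimal, -An drops the offset column and -w1 puts one byte per line, so uniq -c ends up counting how many times each byte value occurs. On a small input the result looks something like this (column widths may vary):

$ printf 'aab' | od -vtu1 -An -w1 | sort -n | uniq -c
      2  97
      1  98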

Or, more efficiently, with perl (which also outputs a count of 0 for bytes that don't occur):

perl -ne 'BEGIN{$/ = \4096};
          $c[$_]++ for unpack("C*");
          END{for ($i=0;$i<256;$i++) {
              printf "%3d: %d\n", $i, $c[$i]}}' my.file

For large files, using sort will be slow. I wrote a short C program to solve the equivalent problem (see this gist for a Makefile with tests):

#include <stdio.h>

#define BUFFERLEN 4096

int main(){
    // This program reads standard input, calculates the frequency of each
    // byte value and prints the frequencies for all byte values upon exit.
    //
    // Example:
    //
    //     $ echo "Hello world" | ./a.out
    //
    // Copyright (c) 2015 Björn Dahlgren
    // Open source: MIT License

    long long tot = 0; // long long is guaranteed to be at least 64 bits, enough for exabytes of input
    long long n[256]; // One byte == 8 bits => 256 unique bytes

    const int bufferlen = BUFFERLEN;
    char buffer[BUFFERLEN];
    int i;
    size_t nread;

    for (i=0; i<256; ++i)
        n[i] = 0;

    do {
        nread = fread(buffer, 1, bufferlen, stdin);
        for (i = 0; i < nread; ++i)
            ++n[(unsigned char)buffer[i]];
        tot += nread;
    } while (nread == bufferlen);
    // here you may want to inspect ferror or feof
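    // e.g. (illustrative addition, not part of the original program):
    //     if (ferror(stdin)) { perror("fread"); return 1; }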

    for (i=0; i<256; ++i){
        printf("%d ", i);
        printf("%f\n", tot ? n[i]/(double)tot : 0.0); // avoid division by zero on empty input
    }
    return 0;
}

usage:

gcc main.c
cat my.file | ./a.out
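
The program prints one line for each of the 256 possible byte values together with its relative frequency. For the "Hello world" example from the source comment, the non-zero lines should look something like this (the remaining 247 values all print 0.000000):

$ echo "Hello world" | ./a.out
...
10 0.083333
32 0.083333
72 0.083333
100 0.083333
101 0.083333
108 0.250000
111 0.166667
114 0.083333
119 0.083333
...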

As mean, sigma and CV are often important when judging the statistics of a binary file's content, I've created a command-line program that graphs all of this data as an ASCII circle of byte deviations from sigma.
http://wp.me/p2FmmK-96
It can be used with grep, xargs and other tools to extract statistics.
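
If you just want the numbers rather than the graph, here is a minimal C sketch of the quantities mentioned above (it is not the linked program, and the file name stats.c is only an example): it reads standard input, builds the same 256-bin byte histogram as the program above and prints the mean, sigma and CV (sigma divided by mean) of the per-byte counts.

#include <math.h>
#include <stdio.h>

int main(void){
    // Sketch: count occurrences of each byte value read from standard input,
    // then report mean, sigma and CV of the 256 per-byte counts.
    long long n[256] = {0};
    long long tot = 0;
    double mean, var = 0.0, sigma;
    int c, i;

    while ((c = getchar()) != EOF) {
        ++n[c];
        ++tot;
    }

    mean = (double)tot / 256.0;          // average count per byte value

    for (i = 0; i < 256; ++i)
        var += ((double)n[i] - mean) * ((double)n[i] - mean);
    sigma = sqrt(var / 256.0);           // population standard deviation

    printf("mean:  %f\n", mean);
    printf("sigma: %f\n", sigma);
    printf("CV:    %f\n", mean > 0.0 ? sigma / mean : 0.0); // coefficient of variation
    return 0;
}

Compile with gcc stats.c -lm (the -lm is needed for sqrt) and use it the same way as above, e.g. cat my.file | ./a.out.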