Generate distribution of file sizes from the command prompt

This seems to work pretty well:

find . -type f -print0 | xargs -0 ls -l | awk '{size[int(log($5)/log(2))]++}END{for (i in size) printf("%10d %3d\n", 2^i, size[i])}' | sort -n

Its output looks like this:

         0   1
         8   3
        16   2
        32   2
        64   6
       128   9
       256   9
       512   6
      1024   8
      2048   7
      4096  38
      8192  16
     16384  12
     32768   7
     65536   3
    131072   3
    262144   3
    524288   6
   2097152   2
   4194304   1
  33554432   1
 134217728   4
The number on the left is the lower limit of a range from that value to twice that value, and the number on the right is the number of files whose size falls in that range. For example, a 5000-byte file gives int(log(5000)/log(2)) = 12, so it is counted on the 4096 line (sizes 4096 through 8191).
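
If your find is GNU find, it can print the sizes itself and you can skip parsing ls output entirely. A sketch of the same power-of-two histogram, assuming GNU findutils for -printf:

# GNU find's -printf '%s\n' emits each file's size in bytes directly,
# so there is no ls output to parse.
find . -type f -printf '%s\n' \
 | awk '{ size[int(log($1)/log(2))]++ }
        END { for (i in size) printf("%10d %3d\n", 2^i, size[i]) }' \
 | sort -n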


Based on garyjohn's answer, here is a one-liner that also formats the output into human-readable form:

find . -type f -print0 | xargs -0 ls -l | awk '{ n=int(log($5)/log(2)); if (n<10) { n=10; } size[n]++ } END { for (i in size) printf("%d %d\n", 2^i, size[i]) }' | sort -n | awk 'function human(x) { x[1]/=1024; if (x[1]>=1024) { x[2]++; human(x) } } { a[1]=$1; a[2]=0; human(a); printf("%3d%s: %6d\n", a[1],substr("kMGTPEZY",a[2]+1,1),$2) }'

Here is the expanded version of it:

find . -type f -print0                                                   \
 | xargs -0 ls -l                                                        \
 | awk '{ n=int(log($5)/log(2));                                         \
          if (n<10) n=10;                                                \
          size[n]++ }                                                    \
      END { for (i in size) printf("%d %d\n", 2^i, size[i]) }'           \
 | sort -n                                                               \
 | awk 'function human(x) { x[1]/=1024;                                  \
                            if (x[1]>=1024) { x[2]++;                    \
                                              human(x) } }               \
        { a[1]=$1;                                                       \
          a[2]=0;                                                        \
          human(a);                                                      \
          printf("%3d%s: %6d\n", a[1],substr("kMGTPEZY",a[2]+1,1),$2) }'

In the first awk, a minimum bucket size is defined so that every file smaller than 1 KiB is collected in one place. In the second awk, the function human(x) turns a byte count into a human-readable size. This part is based on one of the answers here: https://unix.stackexchange.com/questions/44040/a-standard-tool-to-convert-a-byte-count-into-human-kib-mib-etc-like-du-ls1

The sample output looks like:

  1k:    335
  2k:     16
 32k:      5
128k:     22
  1M:     54
  2M:     11
  4M:     13
  8M:      3
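
To sanity-check the human() conversion on its own, you can pipe it a few hand-picked "size count" pairs (the test values below are arbitrary):

# 1024, 1048576 and 1073741824 bytes should come out as 1k, 1M and 1G.
printf '1024 1\n1048576 2\n1073741824 3\n' \
 | awk 'function human(x) { x[1]/=1024; if (x[1]>=1024) { x[2]++; human(x) } }
        { a[1]=$1; a[2]=0; human(a);
          printf("%3d%s: %6d\n", a[1], substr("kMGTPEZY",a[2]+1,1), $2) }'

which prints:

  1k:      1
  1M:      2
  1G:      3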

Try this:

find . -type f -exec ls -lh {} \; | 
 gawk '{match($5,/([0-9.]+)([A-Z]+)/,k); if(!k[2]){print "1K"} \
        else{printf "%.0f%s\n",k[1],k[2]}}' | 
sort | uniq -c | sort -hk 2 

OUTPUT:

 38 1K
 14 2K
  1 30K
  2 62K
 12 2M
  2 3M
  1 31M
  1 46M
  1 56M
  1 75M
  1 143M
  1 191M
  1 246M
  1 7G

EXPLANATION:

  • find . -type f -exec ls -lh {} \; : simple enough: find the regular files under the current directory and run ls -lh on each of them. (Spawning ls once per file is slow on large trees; a lighter-weight variant is sketched after this list.)

  • match($5,/([0-9.]+)([A-Z]+)/,k); : this extracts the file size from the fifth field of the ls -lh output, saving the numeric part in k[1] and the unit suffix in k[2].

  • if(!k[2]){print "1K"} : if k[2] is empty, the size had no suffix, which means the file is under 1K. Assuming you don't care about such tiny sizes, the script prints 1K for all of them.

  • else{printf "%.0f%s\n",k[1],k[2]} : if the file is 1K or larger, round the size to the closest integer and print it along with its unit suffix (K, M, or G).

  • sort | uniq -c : count the occurrences of each line (file size) printed.

  • sort -hk 2 : sort on the second field, treating human-readable sizes numerically (-h). This way, 7G sorts after 8M.
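
If you have GNU coreutils and GNU find, numfmt can do the human-readable conversion instead of gawk, without running ls once per file. A rough equivalent (its rounding and grouping differ slightly from the %.0f version above, and sub-1K files keep their bare byte counts):

# find emits each size in bytes; numfmt --to=iec rewrites the raw byte
# counts in K/M/G form, and the rest of the pipeline is unchanged.
find . -type f -printf '%s\n' \
 | numfmt --to=iec \
 | sort | uniq -c | sort -hk2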