Meaning of "i" in "MiB"?

There are two ways in common use of denoting orders of magnitude to make large numbers easier to read. The first is to use powers of 10:

10⁰ = 1
10¹ = 10
10² = 100
10³ = 1000

The second is to use powers of two:

2⁰ = 1
2¹ = 2
2² = 4
2³ = 8

Using these series as a base, we arrive at the numbers 1000 and 1024 (10³ and 2¹⁰) for the kilo-sized prefix.

There are eight bits to a byte. So one kilobyte is 8×10³ = 8000 bits. Hard drive manufacturers use this method. In computer science, people usually use powers of two, so one kibibyte is 8×2¹⁰ = 8192 bits.
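As a quick check, those two figures can be reproduced with a few lines of Python (just an illustrative sketch of the arithmetic above, not part of any standard):

    BITS_PER_BYTE = 8

    kilobyte_bits = BITS_PER_BYTE * 10**3   # decimal kilobyte: 8000 bits
    kibibyte_bits = BITS_PER_BYTE * 2**10   # binary kibibyte:  8192 bits

    print(kilobyte_bits, kibibyte_bits)     # 8000 8192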

The difference only gets larger as the numbers get larger. Some have even mixed the two systems to get nice numbers to put on their packaging. This is why a 1.44 MB floppy disk holds neither 1.44 megabytes nor 1.44 mebibytes: its "MB" is 1000×1024 bytes.
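You can see the mixing in the numbers themselves (an illustrative calculation, using the floppy's actual capacity of 1,474,560 bytes):

    capacity = 1_474_560              # bytes on a high-density 3.5" floppy

    print(capacity / 1000**2)         # 1.47456  -> megabytes (10^6)
    print(capacity / 1024**2)         # 1.40625  -> mebibytes (2^20)
    print(capacity / (1000 * 1024))   # 1.44     -> the mixed unit on the box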

The logic behind the i is that the terms are derived from the original SI prefixes, kilo, mega, giga, but with the word binary mixed in. So the i is the second letter of binary. The mnemonic for the kibibyte is "kilo binary byte", and "KiB" is read aloud as "kibibyte".

All of this is defined in the IEC 80000 standard.

Note that a mebibyte is not defined as 2²⁰, but as (2¹⁰)², although they are equal. A gibibyte is (2¹⁰)³, a tebibyte is (2¹⁰)⁴, and so on.

Prefix       Bytes                      Prefix       Bytes
1 Byte     = (2^10)^0 = 1               1 Byte     = (10^3)^0 = 1
1 Kibibyte = (2^10)^1 = 1024            1 Kilobyte = (10^3)^1 = 1000
1 Mebibyte = (2^10)^2 = 1048576         1 Megabyte = (10^3)^2 = 1000000
1 Gibibyte = (2^10)^3 = 1073741824      1 Gigabyte = (10^3)^3 = 1000000000
1 Tebibyte = (2^10)^4 = 1099511627776   1 Terabyte = (10^3)^4 = 1000000000000
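The two columns can also be generated programmatically; here is a small sketch (the variable names are my own) that reproduces the table above by raising 2¹⁰ and 10³ to the same powers:

    IEC = ["Byte", "Kibibyte", "Mebibyte", "Gibibyte", "Tebibyte"]
    SI  = ["Byte", "Kilobyte", "Megabyte", "Gigabyte", "Terabyte"]

    for n, (iec, si) in enumerate(zip(IEC, SI)):
        print(f"1 {iec:9} = (2^10)^{n} = {(2**10)**n:<16}"
              f"1 {si:9} = (10^3)^{n} = {(10**3)**n}")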

Keep in mind that, very often, the term kilobyte is used when the author means kibibyte. The binary prefixes were only introduced around 1999, as Randy Orrison points out.


As nealmcb found out in the comments, there is an official policy on this:
https://wiki.ubuntu.com/UnitsPolicy

In summary, the policy tells developers to use either SI or IEC prefixes, but never to mix them. It goes on to say:

For file sizes there are two possibilities:

  • Show both, base-10 and base-2 (in this order). An example is the Linux kernel: "2930277168 512-byte hardware sectors: (1.50 TB/1.36 TiB)"
  • Only show base-10, or give the user the opportunity to decide between base-10 and base-2 (the default must be base-10).
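Here is a quick sketch of the first option, formatting a size in both base-10 and base-2 like the kernel line quoted above (the helper name human_both is my own, not part of the policy):

    def human_both(num_bytes):
        tb, tib = 1000**4, 1024**4
        return f"{num_bytes / tb:.2f} TB/{num_bytes / tib:.2f} TiB"

    sectors = 2930277168
    print(human_both(sectors * 512))   # 1.50 TB/1.36 TiB

A full implementation would pick the prefix based on the magnitude of the value; the fixed TB/TiB pair here just mirrors the kernel example.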

What does MiB stand for? In particular the "i"?

Since no one actually answered this: "MiB" stands for "megabinary byte", which can be abbreviated to "mebibyte" (though this sounds kind of stupid, and I'd rather just stick with saying "megabinary"). See the NIST explanation.

So the "i" comes from the word "binary".

There were other proposals to abbreviate these units in the past, but they all failed to gain traction:

  • κ = 1024, κ² = 1024², κ³ = 1024³, ... (Greek letter kappa, hard to type)
  • KKB = 1024, MMB = 1024², GGB = 1024³, ... (could be misinterpreted as megamegabyte = TB)
  • bK = 1024, bK² = 1024², bK³ = 1024³, ... (when proposed, many computers didn't even have lowercase)
  • 1B10 = 1024, 1B20 = 1024², 1B30 = 1024³, ...
  • k₂B = 1024, M₂B = 1024², G₂B = 1024³, ...

It's an IEC standard prefix; it means "by a power of two":

2^10 = 1024 = Ki-

2^20 = 1048576 = Mi-
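For instance, a tiny sketch (the names are my own) that converts a value given with an IEC prefix into bytes:

    IEC = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30, "Ti": 2**40}

    def to_bytes(value, prefix):
        """Convert e.g. (1.5, "Mi") to 1572864 bytes."""
        return int(value * IEC[prefix])

    print(to_bytes(1.5, "Mi"))   # 1572864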

More details at:

http://en.wikipedia.org/wiki/Kibi-#IEC_standard_prefixes

http://en.wikipedia.org/wiki/Mebibyte