What is the difference between a kibibyte, a kilobit, and a kilobyte?
1 KiB (Kibibyte) = 1,024 B (Bytes) (2^10 Bytes)
1 kb (Kilobit) = 125 B (Bytes) (10^3 Bits ÷ (8 bits / byte) = 125 B)
1 kB (Kilobyte) = 1,000 B (Bytes) (10^3 Bytes)
It's the same way with any SI prefix: k (1×10^3), M (1×10^6), G (1×10^9); so, by extension:
1 MiB (Mebibyte) = 1,048,576 B (Bytes) (2^20 Bytes)
1 Mb (Megabit) = 125,000 B (Bytes) (10^6 Bits ÷ (8 bits / byte) = 125,000 B)
1 MB (Megabyte) = 1,000,000 B (Bytes) (10^6 Bytes)
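To make these relationships concrete, here's a minimal Python sketch (the function name and layout are just my own illustration, not any standard library):

```python
# Minimal sketch: express a raw byte count in the units discussed above.
def describe(n_bytes):
    print(f"{n_bytes:,} B")
    print(f"  = {n_bytes / 10**3:,.1f} kB   (decimal: 10^3 bytes)")
    print(f"  = {n_bytes / 2**10:,.1f} KiB  (binary: 2^10 bytes)")
    print(f"  = {n_bytes * 8 / 10**3:,.1f} kb   (decimal: 10^3 bits)")

describe(1_000_000)  # 1,000.0 kB, ~976.6 KiB, 8,000.0 kb
```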
The only ones that are a bit different are the IEC binary prefixes (kibi/mebi/gibi etc.), because they are in base 2, not base 10 (i.e. everything equals 2^something instead of 10^something). I prefer to just use the SI prefixes because I find them a lot easier. Plus, Canada (my country) uses the metric system, so I'm used to, for instance, 1kg = 1000g (or 1k of anything = 1000 of the base thing). None of these are wrong or right; just make sure you know which one you're using and what it really equates to.
To appease the commenters:
1 Byte (B) = 2 nibbles = 8 bits (b)
This is why, if you've ever taken a look in a hex editor, everything is split into two hexadecimal characters; each hex character is the size of a nibble, and there are two to a byte. For instance:
198 (decimal) = C6 (hex) = 11000110 (bits)
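You can reproduce that split in a couple of lines of Python (nothing assumed here beyond standard string formatting):

```python
n = 198
print(f"{n} (decimal) = {n:02X} (hex) = {n:08b} (bits)")  # 198 = C6 = 11000110

# Each hex character covers exactly one 4-bit nibble:
high, low = n >> 4, n & 0x0F
print(f"high nibble {high:X} = {high:04b}, low nibble {low:X} = {low:04b}")  # C = 1100, 6 = 0110
```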
There are a few basic terms that are simple and easy to understand:
* A bit (b) is the smallest unit of data, consisting of just {0,1}
* 1 nibble (-) = 4 bits (cutesy term with limited usage; mostly bitfields)
* 1 byte (B) = 8 bits (you could also say 2 nibbles, but that’s rare)
To convert between bits and bytes (with any prefix), just multiply or divide by eight; nice and simple.
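In code, that's a single multiplication or division by 8; the function names below are just for illustration:

```python
# The prefix (kilo, mega, ...) is unaffected; only the factor of 8
# between bits (b) and bytes (B) changes.
def bits_to_bytes(bits):
    return bits / 8

def bytes_to_bits(nbytes):
    return nbytes * 8

print(bits_to_bytes(8_000))  # 1000.0  (1 kb -> 1 kB)
print(bytes_to_bits(1_000))  # 8000    (1 kB -> 1 kb)
```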
Now, things get a little more complicated because there are two systems of measuring large groups of data: decimal and binary. For years, computer programmers and engineers just used the same terms for both, but the confusion eventually evoked some attempts to standardize a proper set of prefixes.
Each system uses a similar set of prefixes that can be applied to either bits or bytes. The prefixes start out the same in both systems, but the binary ones sound like baby-talk after that.
The decimal system is base-10 which most people are used to and comfortable using because we have 10 fingers. The binary system is base-2 which most computers are used to and comfortable using because they have two voltage states.
The decimal system is obvious and easy to use for most people (it’s simple enough to multiply in our heads). Each prefix goes up by 1,000 (the reason for that is a whole different matter).
The binary system is much harder for most non-computer people to use, and even programmers often can't multiply arbitrarily large numbers in their heads. Nevertheless, it's a simple matter of multiples of two. Each prefix goes up by 1,024. One "K" is 1,024 because that is the closest power of two to the decimal "k" of 1,000 (the two are close at this first step, but the difference grows rapidly with each successive prefix).
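You can verify the "closest power of two" claim with a quick throwaway check:

```python
# 2^10 = 1024 is the power of two nearest to 10^3 = 1000.
powers = [2**e for e in range(8, 13)]            # 256, 512, 1024, 2048, 4096
print(min(powers, key=lambda p: abs(p - 1000)))  # 1024
```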
The numbers are the same for bits and bytes that have the same prefix.
* Decimal:
  * 1 kilobyte (kB) = 1,000 B = 1,000^1 B = 1,000 B
  * 1 megabyte (MB) = 1,000 kB = 1,000^2 B = 1,000,000 B
  * 1 gigabyte (GB) = 1,000 MB = 1,000^3 B = 1,000,000,000 B
  * 1 kilobit (kb) = 1,000 b = 1,000^1 b = 1,000 b
  * 1 megabit (Mb) = 1,000 kb = 1,000^2 b = 1,000,000 b
  * 1 gigabit (Gb) = 1,000 Mb = 1,000^3 b = 1,000,000,000 b
  * …and so on, just like with normal Metric units (meters, liters, etc.)
  * each successive prefix is the previous one multiplied by 1,000
* Binary:
  * 1 kibibyte (KiB) = 1,024 B = 1,024^1 B = 1,024 B
  * 1 mebibyte (MiB) = 1,024 KiB = 1,024^2 B = 1,048,576 B
  * 1 gibibyte (GiB) = 1,024 MiB = 1,024^3 B = 1,073,741,824 B
  * 1 kibibit (Kib) = 1,024 b = 1,024^1 b = 1,024 b
  * 1 mebibit (Mib) = 1,024 Kib = 1,024^2 b = 1,048,576 b
  * 1 gibibit (Gib) = 1,024 Mib = 1,024^3 b = 1,073,741,824 b
  * …and so on, using prefixes similar to Metric, but with those funny "bi" syllables (kibi, mebi, gibi…)
  * each successive prefix is the previous one multiplied by 1,024
Notice that the difference between the decimal and binary systems starts small (at 1K, they're only 24 bytes, or about 2.3%, apart), but grows with each level (at 1G, they are >70MiB, or about 6.9%, apart).
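A short sketch that prints how far the two systems drift at each prefix (percentages are relative to the binary value, matching the figures above):

```python
# Compare decimal (1000^n) vs binary (1024^n) prefix values.
for n, (dec, bi) in enumerate([("k", "Ki"), ("M", "Mi"), ("G", "Gi"), ("T", "Ti")], 1):
    d, b = 1000**n, 1024**n
    print(f"1 {dec}B = {d:>16,} B   1 {bi}B = {b:>16,} B   gap: {(b - d) / b:5.1%}")
# gaps: 2.3%, 4.6%, 6.9%, 9.1%
```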
As a general rule of thumb, hardware devices use decimal units (whether bits or bytes) while software uses binary (usually bytes).
This is the reason that some manufacturers, particularly drive manufacturers, like to use decimal units: it makes the drive size sound larger. Yet users get frustrated when they find the drive holds less than they expected once Windows et al. report the size in binary. For example, 500GB ≈ 466GiB, so while the drive is made to contain 500GB and labeled as such, My Computer displays the binary 466GiB (but as "466GB"), so users wonder where the other 34GB went. (Drive manufacturers often add a footnote to packages stating that the "formatted size is less", which is misleading because the filesystem overhead is nothing compared to the difference between decimal and binary units.)
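The arithmetic behind that example, as a sketch:

```python
# A drive labeled "500 GB" (decimal), reported in binary units as Windows does.
labeled = 500 * 10**9                    # 500,000,000,000 bytes
gib = labeled / 2**30
print(f"{gib:.0f} GiB")                  # ~466 GiB
print(f"'missing': {500 - gib:.0f} GB")  # the ~34 GB users wonder about
```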
Networking devices often use bits instead of bytes for historical reasons, and ISPs often like to advertise in bits because it makes the speed of the connections they offer sound bigger: 12Mbps instead of just 1.5MBps. They often even mix and match bits and bytes, decimal and binary. For example, you may subscribe to what the ISP calls a "12Mbps" line, thinking that you are getting 12MBps, when you actually receive at most 1.43MiBps (12,000,000 / 8 / 1,024 / 1,024).
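And the connection-speed arithmetic from that example:

```python
# A "12 Mbps" line (decimal megabits per second) expressed in MiB/s.
advertised = 12 * 10**6                       # bits per second
print(f"{advertised / 8 / 2**20:.2f} MiBps")  # ~1.43 MiBps
```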