The names for 16 and 32 bits
A byte is the smallest unit of data that a computer can work with. The C language defines char to be one "byte", which is CHAR_BIT bits wide. On most systems this is 8 bits.
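As a quick illustration, CHAR_BIT comes from <limits.h>, and sizeof(char) is 1 by definition:

```c
#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* sizeof(char) is always 1; CHAR_BIT tells you how many bits that byte holds. */
    printf("sizeof(char) = %zu, CHAR_BIT = %d\n", sizeof(char), CHAR_BIT);
    return 0;
}
```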
A word, on the other hand, is usually the size of the values the CPU handles natively. Most of the time, this is the size of the general-purpose registers. The problem with this definition is that it doesn't age well.
For example, the MS Windows WORD datatype was defined back in the early days, when 16-bit CPUs were the norm. When 32-bit CPUs came around, the definition stayed, and a 32-bit integer became a DWORD. And now we have 64-bit QWORDs.
Far from "universal", but here are several different takes on the matter:
Windows:

- BYTE - 8 bits, unsigned
- WORD - 16 bits, unsigned
- DWORD - 32 bits, unsigned
- QWORD - 64 bits, unsigned
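For reference, these are roughly how the Windows headers declare them (the exact typedefs vary by header and SDK version, so treat this as a sketch):

```c
/* Approximate Windows typedefs; the real definitions live in windef.h and friends.
 * On Windows, unsigned long is 32 bits even in 64-bit builds (LLP64). */
typedef unsigned char      BYTE;    /*  8 bits */
typedef unsigned short     WORD;    /* 16 bits */
typedef unsigned long      DWORD;   /* 32 bits */
typedef unsigned long long QWORD;   /* 64 bits */
```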
GDB:
- Byte.
- Halfword (two bytes).
- Word (four bytes).
- Giant word (eight bytes).
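These sizes show up in GDB's x (examine memory) command, where the unit letters b, h, w, and g select byte, halfword, word, and giant word (the variable name below is just a placeholder):

```
(gdb) x/4bx &value     4 bytes, in hex
(gdb) x/2hx &value     2 halfwords (16-bit)
(gdb) x/1wx &value     1 word (32-bit)
(gdb) x/1gx &value     1 giant word (64-bit)
```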
<stdint.h>:

- uint8_t - 8 bits, unsigned
- uint16_t - 16 bits, unsigned
- uint32_t - 32 bits, unsigned
- uint64_t - 64 bits, unsigned
- uintptr_t - pointer-sized integer, unsigned

(Signed types exist as well.)
If you're trying to write portable code that relies upon the size of a particular data type (e.g. you're implementing a network protocol), always use <stdint.h>.
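For example, a hypothetical on-the-wire message header might be declared like this (the struct and field names are made up for illustration):

```c
#include <stdint.h>

/* Hypothetical protocol header: every field has an exact, known width,
 * so the layout is the same on 16-, 32-, and 64-bit platforms.
 * (Byte order and struct padding still need to be handled separately,
 * e.g. with htons()/htonl() and careful serialization.) */
struct msg_header {
    uint8_t  version;     /* protocol version        */
    uint8_t  type;        /* message type            */
    uint16_t length;      /* payload length in bytes */
    uint32_t sequence;    /* sequence number         */
};
```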
The correct name for a group of exactly 8 bits is really an octet. A byte may have more or fewer than 8 bits (although this is relatively rare).
Beyond that, there are no rigorously well-defined terms for 16 bits, 32 bits, and so on, as far as I know.