If an integer is signed by default, why does the signed keyword exist?
There are at least two places where the signed keyword is not a no-op:

With char: the signedness of "plain" char is implementation-defined. On implementations where it is an unsigned type, signed char is needed to get the signed variant. Even if char is a signed type, signed char, char, and unsigned char are all distinct types.

With bitfields: bitfield members without explicit signedness have implementation-defined signedness. For example, in
struct foo { int b:1; };
the range of values of b may be { -1, 0 } or { 0, 1 } depending on the implementation. If you want to be sure you get the signed version, you need the signed keyword.
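To see the difference concretely, here is a minimal sketch (the struct and member names are illustrative; the first value printed is implementation-defined, as described above):

#include <stdio.h>

struct plain { int b:1; };           /* implementation-defined signedness */
struct swith { signed int b:1; };    /* guaranteed signed:   values { -1, 0 } */
struct uwith { unsigned int b:1; };  /* guaranteed unsigned: values { 0, 1 } */

int main(void) {
    struct plain p = { .b = 1 };
    struct swith s = { .b = -1 };
    struct uwith u = { .b = 1 };
    /* On GCC (default -fsigned-bitfields on most targets) the first line
       prints -1, because storing 1 in a signed 1-bit field wraps there; on
       an implementation with unsigned plain bitfields it prints 1. */
    printf("p.b = %d\n", p.b);  /* -1 or 1, implementation-defined */
    printf("s.b = %d\n", s.b);  /* always -1 */
    printf("u.b = %d\n", u.b);  /* always 1 */
    return 0;
}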
Note that while the standard is not very clear on this, on popular implementations this applies to typedefs too: if the bitfield member uses a typedef-defined type that doesn't include explicit signedness, the implementation-defined signedness (on GCC, set by -fsigned-bitfields) applies there too. This means types like int32_t should be defined using the signed keyword to avoid really bad surprise behavior when they're used in bitfields.
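A short sketch of that typedef behavior (the typedef names here are hypothetical; the flag behavior is GCC's, as noted above):

typedef int        myint;    /* no explicit signedness recorded */
typedef signed int myint32;  /* explicit signed, as recommended above */

struct s {
    myint   a:4;  /* on GCC, follows -fsigned-bitfields / -funsigned-bitfields */
    myint32 b:4;  /* always signed: range -8 .. 7 */
};

Compiled with gcc -funsigned-bitfields, a gets the range 0 .. 15 while b keeps -8 .. 7; the explicit signed in the typedef makes it immune to the flag.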
char is either signed or unsigned, but in any case it is a type distinct from unsigned char and signed char. Those three are different types:

char
signed char
unsigned char

If not with the signed keyword, some other way would be needed to distinguish them.
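One way to observe this is C11's _Generic, which selects on the static type of its operand. This sketch would fail to compile if any two of the three were the same type, since duplicate associations are a constraint violation:

#include <stdio.h>

#define TYPE_NAME(x) _Generic((x),   \
    char:          "char",           \
    signed char:   "signed char",    \
    unsigned char: "unsigned char")

int main(void) {
    char c = 0;
    signed char sc = 0;
    unsigned char uc = 0;
    /* prints: char / signed char / unsigned char */
    printf("%s / %s / %s\n", TYPE_NAME(c), TYPE_NAME(sc), TYPE_NAME(uc));
    return 0;
}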
Even leaving char aside, why not? It allows you to be explicit:
signed int x; // Someone decided that x
              // must be signed
int y;        // Did the author choose signed
              // consciously? We cannot tell.