Why are flag enums usually defined with hexadecimal values?

Because [Flags] means that the enum is really a bitfield. With [Flags] you can use the bitwise AND (&) and OR (|) operators to combine the flags. When dealing with binary values like this, it is almost always clearer to use hexadecimal values, which is the very reason we use hexadecimal in the first place: each hex digit corresponds to exactly one nibble (four bits), so you can read the bit pattern straight off the literal. With decimal, this 1-to-4 mapping does not hold.
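
For example, here is a minimal sketch of combining and testing flags this way (the Permissions enum and Demo class are just illustrative names, not anything from the framework):

using System;

[Flags]
public enum Permissions
{
    None    = 0x0,
    Read    = 0x1,
    Write   = 0x2,
    Execute = 0x4
}

public static class Demo
{
    public static void Main()
    {
        // Combine flags with bitwise OR.
        Permissions p = Permissions.Read | Permissions.Write;

        // Test for a flag with bitwise AND.
        bool canWrite   = (p & Permissions.Write)   != Permissions.None;   // true
        bool canExecute = (p & Permissions.Execute) != Permissions.None;   // false

        Console.WriteLine($"{p}: canWrite={canWrite}, canExecute={canExecute}");
        // Prints: Read, Write: canWrite=True, canExecute=False
    }
}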


I think it's just because the sequence is always 1, 2, 4, 8 and then you add a 0.
As you can see:

0x1 = 1 
0x2 = 2
0x4 = 4
0x8 = 8
0x10 = 16
0x20 = 32
0x40 = 64
0x80 = 128
0x100 = 256
0x200 = 512
0x400 = 1024
0x800 = 2048

and so on. As long as you remember the sequence 1-2-4-8, you can build all the subsequent flags without having to remember the powers of 2.
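
Written out as a [Flags] enum (the names here are made up for illustration), the whole table above is just that 1-2-4-8 sequence shifted one hex digit to the left each time it wraps:

using System;

[Flags]
public enum Options
{
    None   = 0x0,
    Flag1  = 0x1,    // 1
    Flag2  = 0x2,    // 2
    Flag3  = 0x4,    // 4
    Flag4  = 0x8,    // 8
    Flag5  = 0x10,   // 16
    Flag6  = 0x20,   // 32
    Flag7  = 0x40,   // 64
    Flag8  = 0x80,   // 128
    Flag9  = 0x100,  // 256
    Flag10 = 0x200,  // 512
    Flag11 = 0x400,  // 1024
    Flag12 = 0x800   // 2048
}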


It makes it easy to see that these are binary flags.

None  = 0x0,  // == 00000
Flag1 = 0x1,  // == 00001
Flag2 = 0x2,  // == 00010
Flag3 = 0x4,  // == 00100
Flag4 = 0x8,  // == 01000
Flag5 = 0x10  // == 10000

Continuing the progression makes it even clearer:

Flag6 = 0x20  // == 00100000
Flag7 = 0x40  // == 01000000
Flag8 = 0x80  // == 10000000

Rationales may differ, but an advantage I see is that hexadecimal reminds you: "Okay, we're not dealing with numbers in the arbitrary human-invented world of base ten anymore. We're dealing with bits, the machine's world, and we're going to play by its rules." Hexadecimal is rarely used unless you're dealing with relatively low-level topics where the memory layout of data matters; using it hints that this is the situation we're in now.

Also, in C# (just as in C), x << y with constant operands is a valid compile-time constant, so it can be used directly as an enum member value. Using bit shifts seems the clearest of all:

[Flags]
public enum MyEnum
{
    None  = 0,
    Flag1 = 1 << 0,  //1
    Flag2 = 1 << 1,  //2
    Flag3 = 1 << 2,  //4
    Flag4 = 1 << 3,  //8
    Flag5 = 1 << 4   //16
}
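
A quick usage sketch (assuming the MyEnum declaration above plus a using System; for Console) shows the shifted values behave exactly like their hex equivalents:

MyEnum value = MyEnum.Flag1 | MyEnum.Flag3;                // 1 | 4 == 5

Console.WriteLine((int)MyEnum.Flag5);                      // 16, same as 0x10
Console.WriteLine((value & MyEnum.Flag3) != MyEnum.None);  // True
Console.WriteLine((value & MyEnum.Flag2) != MyEnum.None);  // False
Console.WriteLine(value);                                  // Flag1, Flag3 (thanks to [Flags])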