Any reason to use byte/short, etc. in C#?

A single byte compared to a long won't make a huge difference memory-wise, but when you start allocating large arrays, those 7 extra bytes per element add up quickly.
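For a rough illustration, here's a minimal sketch (the 10-million-element arrays and the GC.GetTotalMemory measurement are just for demonstration; exact numbers will vary by runtime):

```csharp
using System;

class ArraySizeDemo
{
    static void Main()
    {
        long before = GC.GetTotalMemory(forceFullCollection: true);

        var bytes = new byte[10_000_000];   // 1 byte per element: ~10 MB
        long afterBytes = GC.GetTotalMemory(true);

        var longs = new long[10_000_000];   // 8 bytes per element: ~80 MB
        long afterLongs = GC.GetTotalMemory(true);

        Console.WriteLine($"byte[10M]: ~{(afterBytes - before) / 1_000_000} MB");
        Console.WriteLine($"long[10M]: ~{(afterLongs - afterBytes) / 1_000_000} MB");

        GC.KeepAlive(bytes);   // keep both arrays reachable until measured
        GC.KeepAlive(longs);
    }
}
```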

What's more, the smaller data types communicate a developer's intent much better: when you encounter a byte length, you know immediately that length's range is that of a byte.
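For instance, a made-up signature like the one below tells the caller the valid range without a single line of validation:

```csharp
static class Display
{
    // Hypothetical API: declaring 'level' as a byte guarantees the
    // 0..255 range at compile time, so no runtime range check is needed.
    public static void SetBrightness(byte level)
    {
        // Display.SetBrightness(300) would not even compile.
    }
}
```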


What I think this question is getting at is that 10+ years ago it was common practice to think about what values your variables needed to store and, if, for example, you were storing a percentage (0..100), to use a byte (0 to 255 in C#, where byte is unsigned; sbyte covers -128 to 127), as it was adequately large for the job and thus seen as less "wasteful".

These days, however, such measures are unnecessary. Memory isn't usually at that much of a premium, and if it were, you'd probably be defeated by modern processors aligning data on 32-bit word boundaries (if not 64-bit) anyway.
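Here's a small sketch of that alignment effect (the struct and its fields are illustrative):

```csharp
using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential)]
struct Padded
{
    public byte Flag;   // 1 byte of data...
    public long Value;  // ...but Value must be 8-byte aligned,
}                       // so 7 bytes of padding are inserted

class PaddingDemo
{
    static void Main()
    {
        // Prints 16, not 9: alignment eats the "savings" from the byte.
        Console.WriteLine(Marshal.SizeOf<Padded>());
    }
}
```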

Unless you're storing arrays of thousands of these things, such micro-optimizations are (now) an irrelevant distraction.

Frankly, I can't remember the last time I used a byte for anything other than raw data, and I can't think of the last time I used a short for, well, anything.


There's a small performance cost to using data types that are smaller than the CPU's native word size. When a CPU needs to add two bytes together, it loads them into (32-bit) word-sized registers, adds them, adjusts the result (masks off the three most significant bytes and computes carry/overflow), and stores it back into a byte.

That's a lot of work. If you're going to use a variable in a loop, don't make it smaller than the CPU's native word.
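C# itself reflects this: there is no byte + byte arithmetic, so narrow types cost you casts as well as cycles. A small sketch:

```csharp
using System;

class PromotionDemo
{
    static void Main()
    {
        byte a = 100, b = 27;

        // C# defines no byte + byte operator: both operands are
        // promoted to int first, so the result must be cast back down.
        byte sum = (byte)(a + b);
        Console.WriteLine(sum);   // 127

        // A plain int counter matches the native word size,
        // so the loop body needs no narrowing conversions.
        int total = 0;
        for (int i = 0; i < 1_000; i++)
            total += i;
        Console.WriteLine(total); // 499500
    }
}
```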

These data types exist so that code can handle structures that contain them, whether because of size constraints, legacy APIs, or the like.
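For example, here's a sketch of reading a fixed binary header where the on-disk format, not preference, dictates the field widths (the format itself is invented for the example):

```csharp
using System;
using System.IO;

class HeaderDemo
{
    static void Main()
    {
        // Raw bytes standing in for the start of a binary file.
        byte[] raw = { 0x42, 0x4D, 0x10, 0x00, 0x01, 0x00, 0x00, 0x00 };

        using (var reader = new BinaryReader(new MemoryStream(raw)))
        {
            byte magic1 = reader.ReadByte();      // format says: 1 byte
            byte magic2 = reader.ReadByte();      // 1 byte
            ushort version = reader.ReadUInt16(); // 2 bytes, little-endian
            int length = reader.ReadInt32();      // 4 bytes

            Console.WriteLine($"{(char)magic1}{(char)magic2} v{version}, {length} bytes");
        }
    }
}
```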

Tags: C#, Java, Types