Binary representation of a .NET Decimal
https://www.csharpindepth.com/Articles/Decimal
How is a decimal stored?
A decimal is stored in 128 bits, even though only 102 are strictly necessary. It is convenient to consider the decimal as three 32-bit integers representing the mantissa, and then one integer representing the sign and exponent. The top bit of the last integer is the sign bit (in the normal way, with the bit being set (1) for negative numbers) and bits 16-23 (the low bits of the high 16-bit word) contain the exponent. The other bits must all be clear (0). This representation is the one given by decimal.GetBits(decimal), which returns an array of 4 ints.
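To make that layout concrete, here is a minimal C# sketch (my own illustration, not taken from the article) that decodes the four ints returned by decimal.GetBits. The masking and shifting follow the layout described above; the printed values for 3.261m are what that layout implies.

```csharp
using System;

class GetBitsDemo
{
    static void Main()
    {
        decimal d = 3.261m;
        int[] bits = decimal.GetBits(d);

        // bits[0], bits[1], bits[2] are the low, middle and high 32 bits
        // of the 96-bit mantissa; bits[3] holds the sign and exponent.
        bool isNegative = bits[3] < 0;            // top bit of the last int is the sign bit
        int exponent = (bits[3] >> 16) & 0xFF;    // bits 16-23 hold the exponent

        Console.WriteLine($"low={bits[0]}, mid={bits[1]}, high={bits[2]}");
        Console.WriteLine($"negative={isNegative}, exponent={exponent}");
        // For 3.261m this prints: low=3261, mid=0, high=0, negative=False, exponent=3
    }
}
```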
See Decimal.GetBits for the information you want.
Basically it's a 96-bit integer as the mantissa, plus a sign bit, plus an exponent saying how many decimal places to shift it to the right.
So to represent 3.261 you'd have a mantissa of 3261, a sign bit of 0 (i.e. positive), and an exponent of 3. Note that decimal isn't normalized (deliberately) so you can also represent 3.2610 by using a mantissa of 32610 and an exponent of 4, for example.
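A small sketch (my own, under the layout described above) showing that 3.261m and 3.2610m compare equal but are stored with different mantissas and exponents:

```csharp
using System;

class NormalizationDemo
{
    static void Main()
    {
        // decimal is deliberately not normalized: equal values can have
        // different mantissa/exponent pairs depending on trailing zeros.
        foreach (decimal d in new[] { 3.261m, 3.2610m })
        {
            int[] bits = decimal.GetBits(d);
            int exponent = (bits[3] >> 16) & 0xFF;
            Console.WriteLine($"{d}: mantissa(low)={bits[0]}, exponent={exponent}");
        }
        // Expected output:
        // 3.261: mantissa(low)=3261, exponent=3
        // 3.2610: mantissa(low)=32610, exponent=4
    }
}
```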
I have some more information in my article on decimal floating point.