Is it possible to combine two 8-bit DACs to create a 16-bit DAC, sending one byte of the 16-bit word to each of them?

It's possible, but it won't work well.

Firstly, there is the problem of combining the two outputs, with one scaled to precisely 1/256 of the other. (Whether you attenuate one by 1/256, amplify the other by 256, or use some other arrangement, x16 and /16 for example, doesn't matter.)
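To make the arithmetic concrete, here's a minimal Python sketch of the ideal recombination, assuming unipolar DACs sharing a hypothetical 2.5 V reference and a perfect 1/256 attenuator on the LSB DAC; with real parts the scaling is never this exact:

```python
VREF = 2.5  # assumed shared reference, in volts

def dac8(code: int) -> float:
    """Ideal 8-bit DAC: code 0..255 maps to 0..VREF*(255/256)."""
    return VREF * code / 256

def combined(word: int) -> float:
    """Split a 16-bit word into bytes, one per DAC, and sum the
    outputs with the LSB DAC attenuated by exactly 1/256."""
    msb, lsb = word >> 8, word & 0xFF
    return dac8(msb) + dac8(lsb) / 256

# With perfect scaling this matches an ideal 16-bit DAC exactly:
assert abs(combined(0x1234) - VREF * 0x1234 / 65536) < 1e-12
```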

The big problem, however, is that an 8-bit DAC is only likely to be accurate to somewhat better than 8 bits: it may have a "DNL" specification of 1/4 LSB and an "INL" specification of 1/2 LSB. These are the "Differential" and "Integral" nonlinearity specifications, and they measure how large each step between adjacent codes really is. (DNL provides a guarantee between any two adjacent codes, INL between any two codes across the full range of the DAC.)

Ideally, each step would be precisely 1/256 of the full-scale value; but a 1/4 LSB DNL specification means that any individual step may differ from that ideal by up to 25% of one step - normally perfectly acceptable behaviour in an 8-bit DAC.

The trouble is that a 0.25 LSB error in your MSB DAC becomes a 64 LSB error at the 16-bit level - a quarter of the entire range of your LSB DAC!

In other words, your 16-bit DAC has the linearity and distortion of a 10-bit DAC, which is unacceptable for most applications that actually call for a 16-bit DAC.
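The arithmetic behind that claim, as a quick sanity check (plain Python, no assumptions beyond the 0.25 LSB figure above):

```python
import math

msb_error_8bit = 0.25                   # MSB-DAC DNL, in 8-bit LSBs
msb_error_16bit = msb_error_8bit * 256  # the same error, in 16-bit LSBs
print(msb_error_16bit)                  # 64.0 -> a quarter of the LSB DAC's span

# A 64-LSB step error out of 65536 codes leaves 65536/64 = 1024 = 2**10
# distinguishable levels, i.e. roughly 10-bit linearity:
print(16 - math.log2(msb_error_16bit))  # 10.0 effective bits
```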

Now if you can find an 8-bit DAC that guarantees 16-bit accuracy (INL and DNL better than 1/256 LSB) then go ahead: however, such parts aren't economical to make, so the only way to get one is to start with a 16-bit DAC!

Another answer suggests "software compensation": mapping out the exact errors in your MSB DAC and compensating for them by adding the inverse error to the LSB DAC - something long pondered by audio engineers in the days when 16-bit DACs were expensive...

In short, it can be made to work to some extent, but if the 8-bit DAC drifts with temperature or age (it probably wasn't designed to be ultra-stable), the compensation is no longer accurate enough to be worth the complexity and expense.
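For the curious, a rough sketch of what that compensation could look like. Everything here is assumed: `measure_msb_volts` stands in for a one-off calibration with a precision meter, and `VREF` for a 2.5 V reference. Note the clamp at the end - a large error can push the corrected code off the LSB DAC's range, which is the overlap problem discussed in the last answer below.

```python
VREF = 2.5
LSB16 = VREF / 65536  # one 16-bit step, in volts

def build_error_table(measure_msb_volts):
    """Measured-minus-ideal error of the MSB DAC at each of its 256 codes,
    expressed in 16-bit LSBs (run once, at calibration time)."""
    return [(measure_msb_volts(c) - VREF * c / 256) / LSB16 for c in range(256)]

def corrected_lsb_code(word: int, err_table: list) -> int:
    """Shift the LSB-DAC code by the inverse of the MSB DAC's error."""
    msb, lsb = word >> 8, word & 0xFF
    return max(0, min(255, round(lsb - err_table[msb])))
```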


An 8-bit DAC can output \$2^8 = 256\$ different values.

A 16-bit DAC can output \$2^{16} = 65536\$ different values.

Note that the number of levels multiplies; it does not add (which is what happens when you simply sum the outputs of two 8-bit DACs).

If I were to take two 8-bit DACs and sum their outputs, what are the possible values?

Answer: 0, 1, 2, ..., 508, 509, 510 and that's it - only 511 different values!

A 16-bit DAC can do 0, 1, 2, ..., 65534, 65535 - that's 65536 values, a lot more!
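You can check the counting argument in a few lines of Python:

```python
# Distinct output levels: summing two 8-bit DACs vs. weighting one by 256.
summed   = {a + b       for a in range(256) for b in range(256)}
weighted = {a * 256 + b for a in range(256) for b in range(256)}

print(len(summed))    # 511   (0..510: the ranges merely add)
print(len(weighted))  # 65536 (0..65535: a true 16-bit code space)
```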

Theoretically it is possible, but then you will need to multiply the output of one of the 8-bit DACs by exactly 256, and connect the LSB byte to the 1x DAC and the MSB byte to the 256x DAC. But don't be surprised if accuracy and linearity suffer!
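To see how unforgiving that "exactly 256" is, here is a small Python illustration with a deliberately exaggerated gain error (250 instead of 256, about 2.3% low); the output actually moves backwards at every byte boundary:

```python
def combine(word: int, msb_gain: float = 250.0) -> float:
    """Output in ideal 16-bit LSB units; msb_gain should be exactly 256."""
    msb, lsb = word >> 8, word & 0xFF
    return msb * msb_gain + lsb

print(combine(0x00FF))  # 255.0 at input code 255
print(combine(0x0100))  # 250.0 at input code 256 -- the output went DOWN
```

Real gain errors are far smaller, but anything worse than roughly one part in 65,000 still shows up as a linearity error at the 16-bit level.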


The technique is workable if the full-scale voltage of the "inner" DAC is larger than the step size of the outer DAC, and one has a means of accurately (though not necessarily quickly) measuring the output voltages produced by different output codes and applying suitable linearity corrections in software. If the full-scale voltage of the inner DAC might be less than the worst-case step between two adjacent voltages of the outer DAC (bearing in mind that the steps are seldom perfectly uniform), there may be voltages that cannot be obtained with any combination of inner- and outer-DAC codes. If one ensures that the ranges overlap, however, software linearity correction can give good results.
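A rough Python sketch of that approach, assuming a hypothetical calibration table `outer_volts` (one measured voltage per outer code, monotonically increasing) and an inner DAC whose span comfortably exceeds the largest outer step:

```python
import bisect

INNER_STEP = 0.0001  # assumed inner-DAC resolution, in volts

def codes_for(target: float, outer_volts: list) -> tuple:
    """Pick the outer code just below the target, then span the
    remaining gap with the inner DAC."""
    outer = max(0, bisect.bisect_right(outer_volts, target) - 1)
    inner = round((target - outer_volts[outer]) / INNER_STEP)
    return outer, max(0, min(255, inner))
```

The overlap is what keeps the clamp on `inner` from ever truncating: for any target between two measured outer voltages, some in-range inner code can reach it.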

BTW, the old Cypress PSoC chip design (I don't know about newer ones) emulates a nine-bit DAC using two six-bit DACs which are scaled relative to each other. It doesn't use software linearity correction, but then it's only trying to add three bits of precision to a six-bit DAC. Trying to add more than 3-4 bits of precision to any kind of DAC without software compensation is likely not to work very well.