Here is some more info for you 'bit-fiddlers'
Ted Smith said;
@Frode
Howdy
My wife pointed out to me that my last post was a little curt. I guess that given the topic of this thread it was especially so. Sorry.
If there's something specific that I can answer I will (and I think I have been).
The literal 'question' was "…he would like to see some firm proof (based on PSA marketing statements) that a 1-bit DAC (DS) does wonders that a multi-bit DAC cannot do (or outperform for that matter)."
I assumed that "based on PSA marketing statements" was a typo and he meant "not just based on PSA marketing statements".
I don't think demands like "Prove that if you could spend an infinite amount of money another approach can't work better" are productive, which is why I answered as I did.
The implied point is that the paper is a proof that a multi-bit DAC does wonders that a single-bit DAC cannot do. It isn't:
From the abstract:
"Single-stage, 1-bit sigma-delta converters are in principle unperfectible. We prove this fact. The reason, simply stated, is that, when properly dithered, they are in constant overload. The consequence is that distortion, limit cycles, instability, and noise modulation can never be totally avoided. Recording, editing, or storage systems based upon single-stage, 1-bit sigma-delta conversion, and in particular professional systems using this type of conversion, are thus a bad idea."
Since my implementation isn't a perfect single-stage, one-bit sigma-delta converter, does any of it apply?
As an aside: What part of "Recording, editing, or storage systems" applies to a DAC?
The paper is based on a linearized model of single-stage, one-bit sigma-delta converters. We know that the linear model of a sigma-delta modulator, though more tractable than reality, isn't a very good model, so proofs based on it aren't necessarily correct (or incorrect) for the real world.
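To make the linearized-model point concrete, here is a minimal first-order, one-bit sigma-delta modulator in Python. This is an illustrative toy on my part, not the FPGA design discussed above. The linear model replaces the quantizer with an additive noise source; in the real loop the quantizer is the one nonlinear element, and it is exactly what produces the limit cycles and idle tones the linear model cannot predict.

```python
def sigma_delta_1bit(samples):
    """First-order modulator: integrate the input/output error, quantize to +/-1."""
    integrator = 0.0
    y = 0.0                      # previous output (0 before the first sample)
    bits = []
    for x in samples:
        integrator += x - y      # accumulate error between input and last output
        y = 1.0 if integrator >= 0 else -1.0   # one-bit quantizer: the sole nonlinearity
        bits.append(y)
    return bits

# A DC input of 0.25 yields a periodic +/-1 pattern (a limit cycle) whose
# average tracks the input:
bits = sigma_delta_1bit([0.25] * 10_000)
print(round(sum(bits) / len(bits), 2))  # prints 0.25
```

The periodic output pattern for a DC input is precisely the kind of behavior the linearized (additive-noise) model cannot represent.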
Single bit systems ARE inherently linear.
Getting PERFECT linearity from a multi-bit DAC isn't technically possible, but we can choose how close we get with money and work.
Getting PERFECT results from a one-bit DAC isn't necessarily needed either – we can choose how close we get with money and work – my money and work is in the FPGA.
To me the most powerful proof is an existence proof: At the pragmatic level I specifically instrument the sigma delta modulator for overloads – and take appropriate action should they occur. With valid inputs they don't.
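The idea of instrumenting the modulator for overloads can be sketched like this. The bound of 4.0 and the clamping action are assumptions of mine for illustration, not details of the actual FPGA design:

```python
def sigma_delta_1bit_monitored(samples, limit=4.0):
    """First-order one-bit modulator that counts integrator overloads.
    `limit` is an assumed state bound; a real design's bound would differ."""
    integrator, y = 0.0, 0.0
    bits, overloads = [], 0
    for x in samples:
        integrator += x - y                    # accumulate input/output error
        if abs(integrator) > limit:            # state escaped its normal range
            overloads += 1
            integrator = max(-limit, min(limit, integrator))  # one possible "appropriate action"
        y = 1.0 if integrator >= 0 else -1.0
        bits.append(y)
    return bits, overloads

_, ov = sigma_delta_1bit_monitored([0.25] * 10_000)   # valid (in-range) input
print(ov)        # prints 0
_, ov = sigma_delta_1bit_monitored([2.0] * 100)       # out-of-range input
print(ov > 0)    # prints True
```

This mirrors the existence-proof argument: with valid inputs the counter stays at zero.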
-Ted
Frode said;
No problem – not to worry (no insult taken at all).
I am very happy with your prompt replies.
Some follow-up (it might happen that something is lost in translation, but here it goes):
Are you using two-level ZOH-reconstruction prior to low-pass filtering?
Would you agree with a statement saying that one picosecond of fall/rise time or jitter will utterly destroy the linearity of a 1-bit DAC, whereas a multi-bit DAC will be rather insensitive to this?
Could practical implementation issues be the reason for the linearity challenges, i.e. the approach is not very successful if the implementation is flawed or imperfect?
Ted Smith said;
I'm not quite sure what you are getting at with the question "Are you using two-level ZOH-reconstruction prior to low-pass filtering?" Ideally, my digital output prior to the low-pass filtering is perfectly rectangular, which is two-level ZOH. But with the large oversampling ratios involved, this isn't a practical frequency-response problem; besides, it measures +/- 0.01dB from 20Hz to 20kHz.
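For a sense of scale, the ZOH sinc droop at 20kHz can be computed directly. The output rate below is an assumed example (double-rate DSD, 5.6448 MHz) for illustration, not a claim about any specific product's output rate:

```python
import math

def zoh_droop_db(f, fs):
    """Amplitude droop of two-level ZOH reconstruction at frequency f,
    for output sample rate fs: 20*log10(sinc(f/fs))."""
    x = math.pi * f / fs
    return 20 * math.log10(math.sin(x) / x)

# Assumed example rate: 64 * 88.2 kHz = 5.6448 MHz (double-rate DSD).
print(round(zoh_droop_db(20_000, 5_644_800), 4))  # prints -0.0002
```

At such oversampling ratios the droop is a few ten-thousandths of a dB, comfortably inside the +/- 0.01dB measurement quoted above.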
At a given level of jitter, 7-level multibit has an advantage of about 18dB over 2-level. So yes, multibit has an advantage, but it's not huge, and getting the level of jitter needed for 120dB S/N from a one-bit DAC is entirely doable.
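One rough way to see where a figure of that order comes from. The scaling argument below is my own back-of-envelope assumption, not Ted's stated derivation, and it lands in the same ballpark as the ~18dB quoted rather than reproducing it exactly:

```python
import math

# Assumption: a clock edge displaced by jitter injects an error proportional
# to the height of the step being switched. A 2-level output switches its
# full span in one step; 7 levels cover the same span in 6 steps, so each
# transition is about 6x smaller.
step_ratio = 6
advantage_db = 20 * math.log10(step_ratio)
print(round(advantage_db, 1))  # prints 15.6
```

A simple step-height argument gives roughly 15-16dB; the exact figure depends on the transition statistics of the modulator output, which this sketch ignores.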
For a multibit DAC getting components accurate enough to support 120dB S/N is work, but also doable.
The issue is a practical one: whether it's easier to achieve the necessary component accuracy for multibit or to achieve the necessary low frequency phase noise for single bit.
The standard way of handling the component-accuracy problem in a multibit design is to use multiple multibit DACs which aren't linear enough on their own (because of component-accuracy problems) and to "fix" the linearity problems by selecting which one to use for each sample at random, averaging out their non-linearities. At best this is only statistically linear, and one could argue that using one bit and a lower-phase-noise clock is a much "purer" technical solution to the problem.
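The random-selection scheme described above can be sketched at the unit-element level. This is a toy illustration of the averaging idea with made-up mismatch values, not any shipping design:

```python
import random

def dem_dac(codes, elements):
    """Toy randomized element-selection DAC: for each input code k, sum k
    randomly chosen unit elements, so each element's fixed mismatch is
    spread out as noise instead of a fixed, code-dependent (nonlinear) error."""
    outputs = []
    for k in codes:
        chosen = random.sample(range(len(elements)), k)
        outputs.append(sum(elements[i] for i in chosen))
    return outputs

random.seed(0)
units = [1.02, 0.99, 1.01, 0.98, 1.00, 1.00]   # assumed +/-2% element mismatch
outs = dem_dac([3] * 20_000, units)
print(round(sum(outs) / len(outs), 2))          # prints 3.0
```

Any single conversion is still off by the mismatch of the elements chosen; only the long-run average is correct, which is the sense in which the result is "only statistically linear".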