The most challenging part of designing with a voltage-out DAC is to really know how fast one can run this animal within its specified accuracy.
A large range of equipment uses digital-to-analog converters (DACs) for various functions. Common applications for precision, voltage-out DACs are instrumentation, automatic test, and test/measurement equipment. In these applications, the DAC creates DC voltages or arbitrary waveforms.
With these circuits, the challenge is knowing how fast you can run the DAC within its specified accuracy. If your device has a 50 MHz clock frequency, what does that really mean in terms of the voltage-out update speed, or is there more to this question than just knowing the clock frequency?
The voltage-out DAC operates in a first-in, first-out (FIFO) manner (Figure 1). Typically, the user loads the DAC's digital input data (DIN) into the DAC's internal serial input register while the previous data code latches to the N-bit DAC.
Figure 1 Generic block diagram of a precision, voltage-out DAC
With the LDAC (load DAC) pin high, the serial data stream, in combination with SCLK (serial clock), loads the DAC's serial input register (Figure 2). Once the input register is full, a low on LDAC transfers the serial input register contents into the N-bit data latch. LDAC then goes high again, and the analog output voltage begins to appear on the OUT pin as it settles to its final value. During this settling time, the serial input register accepts the next code.
Figure 2 Generic timing diagram of a precision, voltage-out DAC
In a perfect world, the theoretical throughput speed of the DAC equals SCLK/N, where SCLK is the external clock rate and N is the number of DAC bits. For instance, a 16-bit DAC with a maximum clock rate of 50 MHz would have a throughput rate of 50 MHz/16, or 3.125 MHz.
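The theoretical best-case rate above is easy to sanity-check with a few lines of Python (the function name here is just for illustration):

```python
# Theoretical (best-case) DAC throughput: one output update per N serial clocks.
def theoretical_throughput_hz(sclk_hz, n_bits):
    return sclk_hz / n_bits

# 16-bit DAC clocked at 50 MHz
rate = theoretical_throughput_hz(50e6, 16)
print(rate / 1e6)  # 3.125 (MHz)
```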
This is a great DAC throughput rate; however, it is very unrealistic, especially if you are programming analog output voltages with full-scale or rail-to-rail output swings. In this situation, you need to allow time for the output to settle to its final value.
Settling time controls all
So, let’s get realistic. In high-accuracy applications, the settling time determines the DAC’s effective update rate, not the clock’s data rate. The DAC’s analog output frequency response is usually first-order. For large signals, you can easily model this type of response with an R/C circuit, whose step response follows:

VOUT(t) = VSTEP × (1 − e−t/RC)
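A quick consequence of the single-pole R/C model, as a sketch: the residual error decays as e−t/τ, so settling to within ±1/2 LSB of an N-bit full-scale step takes τ·ln(2^(N+1)) time constants' worth of time.

```python
import math

# Single-pole RC model: the residual error after time t is Vstep * exp(-t / tau).
# Settling to within +/- 1/2 LSB of an N-bit full-scale step requires
#   exp(-t / tau) <= 1 / 2**(n_bits + 1),  i.e.  t >= tau * ln(2**(n_bits + 1)).
def time_constants_to_half_lsb(n_bits):
    return math.log(2 ** (n_bits + 1))

print(round(time_constants_to_half_lsb(16), 2))  # ~11.78 time constants
```

For a 16-bit DAC this is nearly 12 time constants, which is why the linear-settling tail, not the clock, dominates the update rate.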
Figure 3 shows the analog settling time response of the DAC system having deadtime, slew, and linear settling sections.
Figure 3 DAC theoretical output settling time
The deadtime is the time the DAC uses to update the analog output from the data latch register. If a large input step occurs, the DAC enters the slew region. At the end of this signal response, the output settles to within ±1/2 LSB of the theoretical final value.
The DAC’s product datasheet will list the settling time specification. For example, for the MAX5717, a 16-bit, 50 MHz, voltage-out DAC, the settling time is 0.75 µs. At first glance, one would guess that the DAC’s throughput rate is 50 MHz divided by 16, or 3.125 MHz. If you put the DAC’s settling time into the mix, the realistic throughput rate for this DAC is the inverse of the settling time, or 1.33 MHz.
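The realistic update rate is simply the reciprocal of the settling time; with the MAX5717’s 0.75 µs settling time this works out to the roughly 1.33 MHz figure quoted here:

```python
# Realistic DAC update rate: one full-accuracy update per settling period.
settling_time_s = 0.75e-6  # MAX5717 datasheet settling time (0.75 us)

realistic_rate_hz = 1.0 / settling_time_s
print(round(realistic_rate_hz / 1e6, 2))  # ~1.33 (MHz)
```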
First things first
The large range of equipment that uses DACs requires DAC behavior optimization that depends on the system’s requirements. For instrumentation, automatic test, and test/measurement applications, the throughput rate is a primary specification. Details of the DAC’s precision performance are very important, but remember that settling time is the one specification that gives you immediate insight into whether the DAC is fast enough for your circuit.
Bonnie Baker has been working with analog and digital designs and systems for more than 30 years and is writing this blog on behalf of Maxim Integrated.